A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1989-01-01
Radar signal processing applications frequently require an estimate of the Doppler centroid of the received signal. A Doppler centroid estimate is required for synthetic aperture radar (SAR) processing, and also for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated from available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal; this kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
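As a point of reference for the clutterlock methods discussed in these entries, below is a minimal Python/NumPy sketch of the standard pulse-pair (correlation) Doppler centroid estimator. This is the common baseline, not the QHS-optimized linear estimator of the paper; the data layout and function name are assumptions.

```python
import numpy as np

def doppler_centroid_pulse_pair(raw, prf):
    """Baseband Doppler centroid (Hz) from complex SAR raw data.

    raw : complex array, shape (n_pulses, n_range_bins), azimuth on axis 0.
    prf : pulse repetition frequency (Hz).
    """
    # Lag-1 azimuth autocorrelation, accumulated over all range bins.
    acf1 = np.sum(np.conj(raw[:-1, :]) * raw[1:, :])
    # The phase of the lag-1 autocorrelation maps linearly to the centroid.
    return prf * np.angle(acf1) / (2.0 * np.pi)
```

Note that the estimate is only known modulo the PRF, which is exactly the ambiguity addressed by the entries that follow.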
Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, M. Y.
1986-01-01
This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.
Doppler centroid estimation ambiguity for synthetic aperture radars
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1989-01-01
A technique for estimation of the Doppler centroid of an SAR in the presence of large uncertainty in antenna boresight pointing is described. Also investigated is the image degradation resulting from data processing that uses an ambiguous centroid. Two approaches for resolving ambiguities in Doppler centroid estimation (DCE) are presented: the range cross-correlation technique and the multiple-PRF (pulse repetition frequency) technique. Because other design factors control the PRF selection for SAR, a generalized algorithm is derived for PRFs not containing a common divisor. An example using the SIR-C parameters illustrates that this algorithm is capable of resolving the C-band DCE ambiguities for antenna pointing uncertainties of about 2-3 deg.
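A hedged sketch of the multiple-PRF idea: each PRF yields a baseband estimate that is ambiguous modulo that PRF, and when the PRFs share no common divisor, only the true centroid is consistent with all of them. The 1 Hz search grid, window size, and least-squares cost below are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def resolve_dce_ambiguity(f_base, prfs, f_max=8000.0):
    """Pick the absolute Doppler centroid consistent with several
    baseband estimates made at different PRFs.

    f_base : baseband centroid estimates (Hz), one per PRF, each in [-PRF/2, PRF/2).
    prfs   : the PRFs (Hz); ideally without a common divisor.
    f_max  : half-width of the search window for the absolute centroid (Hz).
    """
    candidates = np.arange(-f_max, f_max, 1.0)  # 1 Hz grid over the window
    cost = np.zeros_like(candidates)
    # For each candidate absolute centroid, accumulate the wrapped
    # disagreement with every baseband estimate; the truth minimizes it.
    for fb, prf in zip(f_base, prfs):
        resid = (candidates - fb + prf / 2.0) % prf - prf / 2.0
        cost += resid ** 2
    return candidates[np.argmin(cost)]
```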
Effects of window size and shape on accuracy of subpixel centroid estimation of target images
NASA Technical Reports Server (NTRS)
Welch, Sharon S.
1993-01-01
A new algorithm is presented for increasing the accuracy of subpixel centroid estimation of (nearly) point target images in cases where the signal-to-noise ratio is low and the signal amplitude and shape vary from frame to frame. In the algorithm, the centroid is calculated over a data window that is matched in width to the image distribution. Fourier analysis is used to explain the dependency of the centroid estimate on the size of the data window, and simulation and experimental results are presented which demonstrate the effects of window size for two different noise models. The effects of window shape were also investigated for uniform and Gaussian-shaped windows. The new algorithm was developed to improve the dynamic range of a close-range photogrammetric tracking system that provides feedback for control of a large gap magnetic suspension system (LGMSS).
Evaluation of centroiding algorithm error for Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki
2014-08-01
The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors. It is therefore necessary that the centroiding algorithm achieve high accuracy for any observable. Following the approach used for Gaia, we use an LSF fitting method as the centroiding algorithm and investigate the systematic error of the algorithm for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs used in a Principal Component Analysis. We show that the centroiding algorithm error decreases after this method is adopted.
Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung; Curlander, John C.
1991-01-01
Paper discusses performances of two algorithms for resolution of ambiguity in estimated Doppler centroid frequency of echoes in synthetic-aperture radar. One based on range-cross-correlation technique, other based on multiple-pulse-repetition-frequency technique.
Comparison of performance of some common Hartmann-Shack centroid estimation methods
NASA Astrophysics Data System (ADS)
Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.
2016-03-01
The accuracy of the estimation of optical aberrations by measuring the distorted wavefront using a Hartmann-Shack wavefront sensor (HSWS) is mainly dependent upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest spot centroid, first-moment centroid, weighted center of gravity, and intensity-weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these centroid estimation processes are sensitive to the influence of reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for the estimation of optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing, and adaptive windowing). As an example, we use the aberrations of a human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye to emulate myopic and hypermetropic defocus values up to 2 Diopters. We show that any simple centroiding algorithm is sufficient in the case of ophthalmic applications for estimating aberrations within the typical clinically acceptable margin of a quarter Diopter, when certain pre-processing steps are used to reduce the impact of external factors.
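For illustration, a first-moment centroid combined with a simple thresholding pre-processing step (one of the steps compared above) might look like the following Python/NumPy sketch; the mean-plus-k-sigma threshold is an assumed, illustrative choice.

```python
import numpy as np

def centroid_with_threshold(spot, k=3.0):
    """First-moment centroid after background thresholding.

    spot : 2-D intensity array for one sub-aperture.
    k    : threshold = mean + k * std of the frame (an illustrative rule).
    """
    t = spot.mean() + k * spot.std()
    w = np.clip(spot - t, 0.0, None)      # suppress background and noise
    total = w.sum()
    if total == 0.0:
        return None                        # no detectable spot
    y, x = np.indices(w.shape)
    return (x * w).sum() / total, (y * w).sum() / total
```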
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
An integrated algorithm for decomposing overlapping particle images (multi-particle objects) along with determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
CCD centroiding analysis for Nano-JASMINE observation data
NASA Astrophysics Data System (ADS)
Niwa, Yoshito; Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki; Tazawa, Seiichi; Hanada, Hideo
2010-07-01
Nano-JASMINE is a very small satellite mission for global space astrometry with milli-arcsecond accuracy, which will be launched in 2011. In this mission, centroids of stars in CCD image frames are estimated with sub-pixel accuracy. In order to realize such high-precision centroiding, an algorithm utilizing a least squares method is employed. One of its advantages is that centroids can be calculated without explicit assumption of the point spread functions of stars. A CCD centroiding experiment has been performed to investigate whether this data analysis is feasible, and centroids of artificial star images on a CCD are determined with a precision of less than 0.001 pixel. This result indicates that parallaxes of stars within 300 pc of the Sun can be observed with Nano-JASMINE.
Photometric analysis in the Kepler Science Operations Center pipeline
NASA Astrophysics Data System (ADS)
Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.
2010-07-01
We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
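A stripped-down sketch of flux-weighted centroiding over the calibrated pixels of one aperture, in Python/NumPy; this omits the pipeline's uncertainty propagation and is only an illustration of the weighting.

```python
import numpy as np

def flux_weighted_centroid(pixel_values, rows, cols):
    """Flux-weighted photocenter of one target aperture at one cadence.

    pixel_values : calibrated flux per aperture pixel.
    rows, cols   : CCD coordinates of those pixels.
    Returns (row_centroid, col_centroid).
    """
    flux = np.asarray(pixel_values, dtype=float)
    rows = np.asarray(rows, dtype=float)
    cols = np.asarray(cols, dtype=float)
    total = flux.sum()
    return np.dot(flux, rows) / total, np.dot(flux, cols) / total
```

Applying this per cadence yields the centroid time series exported alongside the light curves.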
Ellipsoids for anomaly detection in remote sensing imagery
NASA Astrophysics Data System (ADS)
Grosklos, Guenchik; Theiler, James
2015-05-01
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
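As a baseline for the schemes compared above, the sample centroid and covariance define the familiar squared-Mahalanobis-distance (RX-style) anomaly score; a minimal Python/NumPy sketch:

```python
import numpy as np

def mahalanobis_anomaly_scores(pixels):
    """Anomaly scores for spectral pixels from the sample centroid/covariance.

    pixels : array of shape (n_pixels, n_bands).
    Returns the squared Mahalanobis distance of each pixel from the
    background ellipsoid (the classic RX detector statistic).
    """
    mu = pixels.mean(axis=0)                    # centroid (relatively easy)
    cov = np.cov(pixels, rowvar=False)          # covariance (somewhat harder)
    d = pixels - mu
    inv = np.linalg.inv(cov)
    return np.einsum('ij,jk,ik->i', d, inv, d)  # per-pixel quadratic form
```

Robust variants such as the one studied in the paper replace the mean/covariance step with estimators less sensitive to lever-arm outliers.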
NASA Astrophysics Data System (ADS)
Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna
2017-12-01
The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results. Determining centroids by using random numbers has many weaknesses. The GenClust algorithm, which combines Genetic Algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster. The standard GenClust algorithm uses 50% of its chromosomes obtained through deterministic calculations and 50% obtained from the generation of random numbers. This study modifies the GenClust algorithm so that the chromosomes used are 100% obtained through deterministic calculations. The study yields performance comparisons, expressed as Mean Square Error, of centroid determination in the K-Means method using the GenClust method, the modified GenClust method, and classic K-Means.
Automated quasi-3D spine curvature quantification and classification
NASA Astrophysics Data System (ADS)
Khilari, Rupal; Puchin, Juris; Okada, Kazunori
2018-02-01
Scoliosis is a highly prevalent spine deformity that has traditionally been diagnosed through measurement of the Cobb angle on radiographs. More recent technology such as the commercial EOS imaging system, although more accurate, also requires manual intervention for selecting the extremes of the vertebrae forming the Cobb angle. This results in a high degree of inter- and intra-observer error in determining the extent of spine deformity. Our primary focus is to eliminate the need for manual intervention by robustly quantifying the curvature of the spine in three dimensions, making it consistent across multiple observers. Given the vertebrae centroids, the proposed Vertebrae Sequence Angle (VSA) estimation and segmentation algorithm finds the largest angle between consecutive pairs of centroids within multiple inflection points on the curve. To exploit existing clinical diagnostic standards, the algorithm uses a quasi-3-dimensional approach considering the curvature in the coronal and sagittal projection planes of the spine. Experiments were performed with manually annotated ground-truth classification of publicly available, centroid-annotated CT spine datasets. This was compared with the results obtained from manual Cobb and Centroid angle estimation methods. Using the VSA, we then automatically classify the occurrence and the severity of spine curvature based on Lenke's classification for idiopathic scoliosis. The results appear promising, with a scoliotic angle lying within +/- 9° of the Cobb and Centroid angles, and vertebrae positions differing by at most one position. Our system also achieved perfect classification of scoliotic versus healthy spines on our dataset of six cases.
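The elementary building block of such an angle estimate, the angle between segments joining consecutive vertebral centroids in a projection plane, can be sketched as follows in Python/NumPy; the published VSA definition involves inflection-point selection not shown here.

```python
import numpy as np

def segment_angle_deg(c0, c1, c2):
    """Angle (degrees) between the two segments joining three consecutive
    vertebral centroids projected onto one plane (e.g., coronal)."""
    v1 = np.asarray(c1) - np.asarray(c0)
    v2 = np.asarray(c2) - np.asarray(c1)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```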
Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin
2017-02-01
In this paper, we develop an angles-centroids fitting (ACF) system and a centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroid computation is used to find correspondences between the rays of datum marks and detector pixels. Here, the point spread function of the RHT is classified as a circle of confusion (CoC), and fitting a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of centroid computation demonstrate that the Gaussian fitting method shows a smaller centroid shift, or that the shift grows at a slower pace, when the quality of the image is reduced. In ACF tests, the optical instrumental alignments reach an overall accuracy of 0.1 pixel with the application of a laser-spot centroid tracking program. Locating the crystal at different positions, the feasibility and accuracy of ACF calibration are further validated to 10^-6 to 10^-4 rad root-mean-square error of the calibration differences.
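A minimal sketch of centroiding by fitting a 2D Gaussian profile to a spot, as the centroid algorithm above does for CoC spots, using SciPy; the initial-guess heuristics are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_centroid(spot):
    """Centroid of a spot by fitting a 2-D Gaussian profile."""
    y, x = np.indices(spot.shape)

    def model(coords, amp, x0, y0, sx, sy, offset):
        xx, yy = coords
        return (amp * np.exp(-((xx - x0) ** 2 / (2 * sx ** 2)
                               + (yy - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

    p0 = (spot.max() - spot.min(),          # amplitude guess
          x.mean(), y.mean(), 2.0, 2.0,     # center and width guesses
          spot.min())                       # background offset guess
    popt, _ = curve_fit(model, (x, y), spot.ravel(), p0=p0)
    return popt[1], popt[2]                 # fitted (x0, y0)
```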
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, image quality will be degraded both in the system impulse response function and in geometric fidelity. Two techniques for resolution of the DCE ambiguity for spaceborne SAR are presented: a brief review of the range cross-correlation technique and a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRFs, an algorithm is devised to resolve the ambiguity using PRFs of arbitrary numerical values. The performance of this multiple-PRF technique is analyzed based on a statistical error model. An example demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.
NASA Technical Reports Server (NTRS)
Antreasian, Peter G.
1988-01-01
Two orbit simulations, one representing the actual Geopotential Research Mission (GRM) orbit and the other representing the orbit estimated from orbit determination techniques, are presented. A computer algorithm was created to simulate GRM's drag compensation mechanism so the fuel expenditure and proof mass trajectories relative to the spacecraft centroid could be calculated for the mission. The results of the GRM DISCOS simulation demonstrated that the spacecraft can essentially be drag-free. The results showed that the centroid of the spacecraft can be controlled so that it will not deviate more than 1.0 mm in any direction from the centroid of the proof mass.
Shack-Hartmann wavefront sensor with large dynamic range.
Xia, Mingliang; Li, Chao; Hu, Lifa; Cao, Zhaoliang; Mu, Quanquan; Xuan, Li
2010-01-01
A new spot centroid detection algorithm for a Shack-Hartmann wavefront sensor (SHWFS) is experimentally investigated. The algorithm is a kind of dynamic tracking algorithm that tracks and calculates the corresponding spot centroid of the current spot map based on the spot centroid of the previous spot map, according to the strong correlation of the wavefront slope and the centroid of the corresponding spot between temporally adjacent SHWFS measurements. That is, for adjacent measurements, the spot centroid movement will usually fall within some range. Using the algorithm, the dynamic range of an SHWFS can be expanded by a factor of three in the measurement of tilt aberration compared with the conventional algorithm, more than 1.3 times in the measurement of defocus aberration, and more than 2 times in the measurement of the mixture of spherical aberration plus coma aberration. The algorithm is applied in our SHWFS to measure the distorted wavefront of the human eye. The experimental results of the adaptive optics (AO) system for retina imaging are presented to prove its feasibility for highly aberrated eyes.
An Accurate Centroiding Algorithm for PSF Reconstruction
NASA Astrophysics Data System (ADS)
Lu, Tianhuan; Luo, Wentao; Zhang, Jun; Zhang, Jiajun; Li, Hekun; Dong, Fuyu; Li, Yingke; Liu, Dezi; Fu, Liping; Li, Guoliang; Fan, Zuhui
2018-07-01
In this work, we present a novel centroiding method based on Fourier space Phase Fitting (FPF) for Point Spread Function (PSF) reconstruction. We generate two sets of simulations to test our method. The first set is generated by GalSim with an elliptical Moffat profile and strong anisotropy that shifts the center of the PSF. The second set of simulations is drawn from CFHT i band stellar imaging data. We find non-negligible anisotropy from CFHT stellar images, which leads to ∼0.08 scatter in units of pixels using a polynomial fitting method (Vakili & Hogg). When we apply the FPF method to estimate the centroid in real space, the scatter reduces to ∼0.04 in S/N = 200 CFHT-like sample. In low signal-to-noise ratio (S/N; 50 and 100) CFHT-like samples, the background noise dominates the shifting of the centroid; therefore, the scatter estimated from different methods is similar. We compare polynomial fitting and FPF using GalSim simulation with optical anisotropy. We find that in all S/N (50, 100, and 200) samples, FPF performs better than polynomial fitting by a factor of ∼3. In general, we suggest that in real observations there exists anisotropy that shifts the centroid, and thus, the FPF method provides a better way to accurately locate it.
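For intuition, the shift theorem underlying Fourier-space phase fitting can be sketched as follows: a translation of the PSF multiplies its transform by a linear phase ramp, so fitting a plane to the low-frequency phase recovers the centroid offset. This is an illustrative reduction in Python/NumPy, not the authors' FPF implementation; the frequency cutoff and sign convention are assumptions.

```python
import numpy as np

def fourier_phase_centroid(stamp):
    """Centroid offset (pixels) from the linear phase of the stamp's FFT."""
    n = stamp.shape[0]
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(stamp)))
    freqs = np.fft.fftshift(np.fft.fftfreq(n))
    u, v = np.meshgrid(freqs, freqs)        # u along x, v along y
    phase = np.angle(F)
    # Use only low frequencies, where the S/N is high and the phase
    # is safely unwrapped.
    keep = (np.abs(u) < 0.15) & (np.abs(v) < 0.15)
    A = np.column_stack([u[keep], v[keep]])
    coef, *_ = np.linalg.lstsq(A, phase[keep], rcond=None)
    dx, dy = -coef / (2.0 * np.pi)          # phase slope -> pixel offset
    return dx, dy
```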
CCD centroiding experiment for JASMINE and ILOM
NASA Astrophysics Data System (ADS)
Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Nakajima, Tadashi; Kawano, Nobuyuki; Tazawa, Seiichi; Yamada, Yoshiyuki; Hanada, Hideo; Asari, Kazuyoshi; Tsuruta, Seiitsu
2006-06-01
JASMINE and ILOM are space missions in progress at the National Astronomical Observatory of Japan. These two projects need a common astrometric technique for obtaining precise positions of star images on solid-state detectors to accomplish their objectives. We have carried out measurements of the centroids of artificial star images on a CCD to investigate the accuracy of the star positions, using an algorithm that estimates them from photon-weighted means. We find that the accuracy of the star positions reaches 1/300 pixel for one measurement. We also measure the positions of stars using an algorithm that corrects for the distorted optical image. Finally, we find that the accuracy of the measured star positions from a strongly distorted image is under 1/150 pixel for one measurement.
Centroids evaluation of the images obtained with the conical null-screen corneal topographer
NASA Astrophysics Data System (ADS)
Osorio-Infante, Arturo I.; Armengol-Cruz, Victor de Emanuel; Campos-García, Manuel; Cossio-Guerrero, Cesar; Marquez-Flores, Jorge; Díaz-Uribe, José Rufino
2016-09-01
In this work, we propose algorithms to recover the centroids of the image obtained with a conical null-screen-based corneal topographer. With these algorithms, we obtain the regions of interest (ROIs) of the original image and, using an image-processing algorithm, calculate the geometric centroid of each ROI. In order to improve our algorithm's performance, we use different configurations of null-screen targets, changing their size and number. We also improved the illumination system to avoid inhomogeneous zones in the corneal images. Finally, we report some corneal topographic measurements with the best configuration we found.
Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping
2009-11-10
A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted wavefront detection into a centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noises, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability compared with other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
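A simplified Python/NumPy sketch of combining an adaptive threshold with a window that re-centers on the evolving centroid estimate; the threshold rule, window half-width, and iteration count are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def adaptive_window_centroid(img, x0, y0, half=8, n_iter=3):
    """Centroid of one focal spot with an adaptive threshold and a window
    that re-centres on the current centroid estimate each iteration.

    x0, y0 : rough spot position (e.g., the lenslet grid node).
    """
    cx, cy = float(x0), float(y0)
    for _ in range(n_iter):
        xs = slice(max(int(cx) - half, 0), int(cx) + half + 1)
        ys = slice(max(int(cy) - half, 0), int(cy) + half + 1)
        win = img[ys, xs].astype(float)
        t = win.mean() + 0.5 * win.std()      # adaptive threshold (illustrative)
        w = np.clip(win - t, 0.0, None)
        if w.sum() == 0.0:
            break
        yy, xx = np.indices(w.shape)
        cx = xs.start + (xx * w).sum() / w.sum()
        cy = ys.start + (yy * w).sum() / w.sum()
    return cx, cy
```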
Bias in error estimation when using cross-validation for model selection.
Varma, Sudhir; Simon, Richard
2006-02-23
Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while Leave-One-Out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid classifier with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters, the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set for both Shrunken Centroids and SVM classifiers, for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
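A compact sketch of the nested CV procedure the authors recommend, using scikit-learn with an SVM as in the study (10-fold outer CV here rather than LOOCV, for brevity); the dataset and parameter grid are synthetic placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Inner CV tunes the parameters; outer CV measures the error of the
# whole tuning-plus-fitting procedure, avoiding the selection bias.
X, y = make_classification(n_samples=200, n_features=50, random_state=0)
inner = GridSearchCV(SVC(), {'C': [0.1, 1, 10]}, cv=5)
outer_scores = cross_val_score(inner, X, y, cv=10)
print('nearly unbiased error estimate: %.3f' % (1 - outer_scores.mean()))
```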
Research of centroiding algorithms for extended and elongated spot of sodium laser guide star
NASA Astrophysics Data System (ADS)
Shao, Yayun; Zhang, Yudong; Wei, Kai
2016-10-01
Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. But the spot array obtained by Shack-Hartmann wavefront sensors (WFSs) becomes extended and elongated, due to the thickness and limited size of the sodium LGS, which affects the accuracy of the wavefront reconstruction algorithm. In this paper, we compared three different centroiding algorithms, the Center-of-Gravity (CoG), weighted CoG (WCoG), and Intensity Weighted Centroid (IWC), as well as their accuracies for various extended and elongated spots. In addition, we compared the reconstructed image data from these three algorithms with theoretical results, and found that WCoG and IWC are the best-performing centroiding algorithms for extended and elongated spots among all the algorithms considered.
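The three centroiding rules compared above reduce to small variations on the first moment; a Python/NumPy sketch under common textbook definitions (IWC as a CoG of the squared intensity, WCoG with a Gaussian weighting window; the exact variants used in the paper may differ):

```python
import numpy as np

def cog(I):
    """Plain Center-of-Gravity of an intensity array."""
    y, x = np.indices(I.shape)
    return (x * I).sum() / I.sum(), (y * I).sum() / I.sum()

def iwc(I):
    """Intensity Weighted Centroid: the intensity itself is the weight,
    i.e. a plain CoG applied to I**2."""
    return cog(I ** 2)

def wcog(I, x0, y0, sigma):
    """Weighted CoG: multiply by a Gaussian weighting window centred on a
    prior spot position before taking the CoG."""
    y, x = np.indices(I.shape)
    W = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return cog(I * W)
```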
Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.
Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal
2016-11-01
Breast cancer is one of the most common cancers worldwide and the one most frequently found in women. Early detection of breast cancer provides the possibility of a cure; therefore, a large number of studies are currently under way to identify methods that can detect breast cancer in its early stages. This study aimed to find the effects of the k-means clustering algorithm with different computation measures, such as centroid, distance, split method, epoch, attribute, and iteration, and to carefully identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization. For the foggy centroid, the first centroid was calculated based on random values. For the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using Euclidean and Manhattan distances were better than those using the Pearson correlation. The findings of this work provide extensive understanding of the computational parameters that can be used with k-means. The results indicate that k-means has the potential to classify the BCW dataset.
Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.
Pathak, Biswajit; Boruah, Bosanta R
2017-12-01
Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt.16, 055403 (2014)JOOPDB0150-536X10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
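For context, a least-squares zonal reconstruction in the Southwell geometry, the kind of estimation step both algorithms address, can be sketched as follows with SciPy sparse least squares; the trapezoidal slope averaging and zero-mean (piston-removed) solution are standard but assumed details, and this is not either of the two compared algorithms in full.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def southwell_reconstruct(sx, sy, h=1.0):
    """Least-squares wavefront from Southwell-geometry slope measurements.

    sx, sy : measured x/y slopes on an N x N node grid.
    h      : node spacing.  Returns zero-mean phase on the same grid.
    """
    n = sx.shape[0]
    idx = np.arange(n * n).reshape(n, n)
    A = lil_matrix((2 * n * (n - 1), n * n))
    b = np.zeros(2 * n * (n - 1))
    k = 0
    for i in range(n):                 # x-direction difference equations
        for j in range(n - 1):
            A[k, idx[i, j + 1]] = 1.0
            A[k, idx[i, j]] = -1.0
            b[k] = h * (sx[i, j + 1] + sx[i, j]) / 2.0
            k += 1
    for i in range(n - 1):             # y-direction difference equations
        for j in range(n):
            A[k, idx[i + 1, j]] = 1.0
            A[k, idx[i, j]] = -1.0
            b[k] = h * (sy[i + 1, j] + sy[i, j]) / 2.0
            k += 1
    phi = lsqr(A.tocsr(), b)[0]
    return (phi - phi.mean()).reshape(n, n)
```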
Canovas, Carmen; Alarcon, Aixa; Rosén, Robert; Kasthurirangan, Sanjeev; Ma, Joseph J K; Koch, Douglas D; Piers, Patricia
2018-02-01
To assess the accuracy of toric intraocular lens (IOL) power calculations of a new algorithm that incorporates the effect of posterior corneal astigmatism (PCA). Abbott Medical Optics, Inc., Groningen, the Netherlands. Retrospective case report. In eyes implanted with toric IOLs, the exact vergence formula of the Tecnis toric calculator was used to predict refractive astigmatism from preoperative biometry, surgeon-estimated surgically induced astigmatism (SIA), and implanted IOL power, with and without including the new PCA algorithm. For each calculation method, the error in predicted refractive astigmatism was calculated as the vector difference between the prediction and the actual refraction. Calculations were also made using postoperative keratometry (K) values to eliminate the potential effect of incorrect SIA estimates. The study comprised 274 eyes. The PCA algorithm significantly reduced the centroid error in predicted refractive astigmatism (P < .001). With the PCA algorithm, the centroid error reduced from 0.50 @ 1 to 0.19 @ 3 when using preoperative K values and from 0.30 @ 0 to 0.02 @ 84 when using postoperative K values. Patients who had anterior corneal against-the-rule, with-the-rule, and oblique astigmatism had improvement with the PCA algorithm. In addition, the PCA algorithm reduced the median absolute error in all groups (P < .001). The use of the new PCA algorithm decreased the error in the prediction of residual refractive astigmatism in eyes implanted with toric IOLs. Therefore, the new PCA algorithm, in combination with an exact vergence IOL power calculation formula, led to an increased predictability of toric IOL power.
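The vector arithmetic above (error as a vector difference, summarized by a centroid) is conventionally done in double-angle space. A minimal Python/NumPy sketch, assuming plain cylinder magnitude/axis inputs and illustrative values:

```python
import numpy as np

def double_angle(cyl, axis_deg):
    """Map cylinder magnitude/axis to a double-angle astigmatism vector."""
    th = np.radians(axis_deg)
    return np.array([cyl * np.cos(2 * th), cyl * np.sin(2 * th)])

def prediction_error(pred_cyl, pred_ax, act_cyl, act_ax):
    """Vector difference between predicted and actual refractive astigmatism."""
    return double_angle(pred_cyl, pred_ax) - double_angle(act_cyl, act_ax)

# Centroid error over a cohort = mean of the per-eye error vectors.
errors = np.array([prediction_error(0.75, 90, 0.50, 95),
                   prediction_error(1.00, 10, 1.25, 5)])
centroid_error = errors.mean(axis=0)
```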
Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G
2008-11-24
An eigenvector-analysis-based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method results in a much higher signal-to-noise ratio than the standard 1-D weighted-centroid dip-finding algorithm.
Trajectory data privacy protection based on differential privacy mechanism
NASA Astrophysics Data System (ADS)
Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong
2018-05-01
In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user's trajectory data; secondly, it forms a polygon from each protected point and the adjacent, frequently accessed points selected from the accessing-point database, and calculates the polygon centroids; finally, noise is added to the polygon centroids by the differential privacy method, the polygon centroids replace the protected points, and the algorithm constructs and issues the new trajectory data. The experiments show that the proposed algorithms run quickly, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
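The centroid-perturbation step can be sketched with the standard Laplace mechanism in Python/NumPy; the sensitivity value and function name are assumptions, and the polygon-construction step described above is taken as given.

```python
import numpy as np

def dp_centroid(points, epsilon, sensitivity):
    """Polygon-centroid perturbation via the Laplace mechanism.

    points      : (k, 2) array of polygon vertices (protected point plus
                  adjacent high-frequency points, as in the scheme above).
    epsilon     : privacy budget.
    sensitivity : L1 sensitivity of the centroid query (depends on the
                  coordinate bounds; assumed known here).
    """
    centroid = points.mean(axis=0)
    noise = np.random.laplace(scale=sensitivity / epsilon, size=2)
    return centroid + noise      # released in place of the protected point
```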
Trilateration-based localization algorithm for ADS-B radar systems
NASA Astrophysics Data System (ADS)
Huang, Ming-Shih
Rapidly increasing growth and demand in various unmanned aerial vehicles (UAV) have pushed governmental regulation development and numerous technology research advances toward integrating unmanned and manned aircraft into the same civil airspace. Safety of other airspace users is the primary concern; thus, with the introduction of UAV into the National Airspace System (NAS), a key issue to overcome is the risk of a collision with manned aircraft. The challenge of UAV integration is global. As the automatic dependent surveillance-broadcast (ADS-B) system has gained wide acceptance, additional exploitations of the radioed satellite-based information are topics of current interest. One such opportunity includes the augmentation of the communication ADS-B signal with a random bi-phase modulation for concurrent use as a radar signal for detecting other aircraft in the vicinity. This dissertation provides a detailed discussion of the ADS-B radar system, as well as the formulation and analysis of a suitable non-cooperative multi-target tracking method for the ADS-B radar system using radar ranging techniques and particle filter algorithms. In order to deal with specific challenges faced by the ADS-B radar system, several estimation algorithms are studied. Trilateration-based localization algorithms are proposed due to their easy implementation and their ability to work with coherent signal sources. The centroid of the three most closely spaced intersections of constant-range loci is conventionally used as the trilateration estimate without rigorous justification. In this dissertation, we address the quality of trilateration intersections through range scaling factors. A number of well-known triangle centers, including the centroid, incenter, Lemoine point (LP), and Fermat point (FP), are discussed in detail. To the author's best knowledge, LP was never previously associated with trilateration techniques. According to our study, LP is proposed as the best trilateration estimator thanks to the desirable property that the total distance to the three triangle edges is minimized. It is demonstrated through simulation that LP outperforms centroid localization without additional computational load. In addition, severe trilateration scenarios such as two-intersection cases are considered in this dissertation, and enhanced trilateration algorithms are proposed. The particle filter (PF) is also discussed in this dissertation, and a simplified resampling mechanism is proposed. In addition, the low-update-rate measurement due to the ADS-B system specification is addressed in order to provide acceptable estimation results. A supplementary particle filter (SPF) is proposed that takes advantage of the waiting time before the next measurement is available and improves the estimation convergence rate and accuracy. While the PF suffers from sample impoverishment, especially when the number of particles is not sufficiently large, the SPF allows the particles to redistribute to high-likelihood areas over iterations using the same measurement information, thereby improving the estimation performance.
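For reference, the two triangle centers compared above are easy to compute from the three pairwise intersections of constant-range loci; the Lemoine (symmedian) point has barycentric weights a²:b²:c². A Python/NumPy sketch with made-up intersection coordinates:

```python
import numpy as np

def centroid(A, B, C):
    """Arithmetic mean of the three intersection points."""
    return (A + B + C) / 3.0

def lemoine_point(A, B, C):
    """Symmedian (Lemoine) point: barycentric weights a^2 : b^2 : c^2,
    where a, b, c are the side lengths opposite vertices A, B, C."""
    a2 = np.sum((B - C) ** 2)    # squared side lengths
    b2 = np.sum((C - A) ** 2)
    c2 = np.sum((A - B) ** 2)
    return (a2 * A + b2 * B + c2 * C) / (a2 + b2 + c2)

# Three hypothetical constant-range-loci intersections:
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.5]), np.array([1.5, 3.0])
print(centroid(A, B, C), lemoine_point(A, B, C))
```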
Algorithms for High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Morookian, John-Michael; Lambert, James
2010-01-01
Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out only the ROI that contains the cornea and pupil. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen. The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate.
Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods
NASA Astrophysics Data System (ADS)
Koreň, Milan; Mokroš, Martin; Bucha, Tomáš
2017-12-01
This study compares the accuracies of diameter at breast height (DBH) estimations by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
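As a generic point of comparison for the five methods above (whose details the abstract does not spell out), the following is a standard algebraic (Kasa) least-squares circle fit to the points of one stem cross-section slice, sketched in Python/NumPy:

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic least-squares circle fit (Kasa method) to stem
    cross-section points; returns centre (a, b) and diameter 2r.

    Solves x^2 + y^2 = 2ax + 2by + c in the least-squares sense,
    then recovers r from r^2 = c + a^2 + b^2.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), 2.0 * r
```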
An adaptive tracker for ShipIR/NTCS
NASA Astrophysics Data System (ADS)
Ramaswamy, Srinivasan; Vaitekunas, David A.
2015-05-01
A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank-and-select the best candidate target, and the gating of the selected target to further improve tracker performance. This paper will describe a new adaptive tracker algorithm added to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). The new adaptive tracking algorithm is an optional feature used with any of the existing internal NTCS or user-defined seeker algorithms (e.g., binary centroid, intensity centroid, and threshold intensity centroid). The algorithm segments the detected pixels into clusters, and the smallest set of clusters that meet the detection criterion is obtained by using a knapsack algorithm to identify the set of clusters that should not be used. The rectangular area containing the chosen clusters defines an inner boundary, from which a weighted centroid is calculated as the aim-point. A track-gate is then positioned around the clusters, taking into account the rate of change of the bounding area and compensating for any gimbal displacement. A sequence of scenarios is used to test the new tracking algorithm on a generic unclassified DDG ShipIR model, with and without flares, and demonstrate how some of the key seeker signals are impacted by both the ship and flare intrinsic signatures.
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
NASA Technical Reports Server (NTRS)
Peterson, Harold; Koshak, William J.
2009-01-01
An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A
2018-06-01
In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius.
NASA Astrophysics Data System (ADS)
Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua
2017-05-01
With the purpose of reinforcing the correlation analysis of risk-assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: by clustering all particles and using the centroid of each cluster as its representative in the computation, the calculation load is reduced. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
NASA Astrophysics Data System (ADS)
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means Clustering. In its application, the determination of the initial cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with the starting centroids determined randomly and by a KD-Tree method. Random initial centroid determination on a data set of 1000 student academic records, used to classify students at risk of dropping out, gives an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gives an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means Clustering with initial KD-Tree centroid selection has better accuracy than K-Means Clustering with random initial centroid selection.
A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters
Wang, Zhihao; Yi, Jing
2016-01-01
To address the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposes a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm is put forward. The algorithm, according to the characteristics of the dataset, automatically determines the possible maximum number of clusters instead of using the empirical rule √n, and obtains the optimal initial cluster centroids, mitigating the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, the paper proposes a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensures that when the number of clusters approaches the number of objects in the dataset, the value of the clustering validity index does not monotonically decrease toward zero, so that the determination of the optimal number of clusters does not lose robustness and decisiveness. Then, based on these studies, a self-adaptive FCM algorithm is put forward to estimate the optimal number of clusters by an iterative trial-and-error process. Finally, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determines the optimal number of clusters, but also reduces the iterations of FCM while yielding a stable clustering result. PMID:28042291
Vital sign sensing method based on EMD in terahertz band
NASA Astrophysics Data System (ADS)
Xu, Zhengwu; Liu, Tong
2014-12-01
Non-contact detection of respiration and heartbeat rates could be applied to finding survivors trapped in disasters or to the remote monitoring of a patient's respiration and heartbeat. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans by utilizing terahertz radar; it further lessens the effects of noise, suppresses cross-terms, and enhances detection accuracy. A human-target echo model for the terahertz radar is first presented. Combining an over-sampling method, a low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. A down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that, compared with the previously presented vital-sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for the detection of respiration and heartbeat signals in a complicated environment.
Centroid tracker and aimpoint selection
NASA Astrophysics Data System (ADS)
Venkateswarlu, Ronda; Sujata, K. V.; Venkateswara Rao, B.
1992-11-01
Autonomous fire-and-forget weapons have gained importance for achieving an accurate first-pass kill by hitting the target at an appropriate aim point. The centroid of the image presented by a target in the field of view (FOV) of a sensor is generally accepted as the aimpoint for these weapons. Centroid trackers are applicable only when the target image is of significant size in the FOV of the sensor but does not overflow the FOV. As the range between the sensor and the target decreases, the image of the target grows, finally overflowing the FOV at close ranges, and the centroid point on the target keeps changing, which is not desirable. Moreover, the centroid need not be the most desired or vulnerable point on the target. For hardened targets like tanks, proper aimpoint selection and guidance down to almost zero range are essential to achieve maximum kill probability. This paper presents a centroid tracker realization. As the centroid offers a stable tracking point, it can be used as a reference to select the proper aimpoint. The centroid and the desired aimpoint are simultaneously tracked to avoid jamming by flares and also to handle the problems arising from image overflow. Thresholding of the gray-level image to a binary image is a crucial step in a centroid tracker. Different thresholding algorithms are discussed and a suitable algorithm is chosen. The real-time hardware implementation of the centroid tracker with a suitable thresholding technique is presented, including the interfacing to a multimode tracker for autonomous target tracking and aimpoint selection. The hardware uses very-high-speed arithmetic and programmable logic devices to meet the speed requirement and a microprocessor-based subsystem for system control. The tracker has been evaluated in a field environment.
Weather, Climate, and Society: New Demands on Science and Services
NASA Technical Reports Server (NTRS)
2010-01-01
A new algorithm has been constructed to estimate the path length of lightning channels for the purpose of improving model predictions of lightning NOx in both regional air quality and global chemistry/climate models. This algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to plots of individual discharges whose lengths were computed by hand. Several thousand lightning flashes within 120 km of the NALMA network centroid, gathered from all four seasons, were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all flashes, the ground flashes, and the cloud flashes. Channel length distributions were also obtained for the different seasons.
Reducing Earth Topography Resolution for SMAP Mission Ground Tracks Using K-Means Clustering
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2013-01-01
The K-means clustering algorithm is used to reduce Earth topography resolution for the SMAP mission ground tracks. As SMAP propagates in orbit, knowledge of the radar antenna footprints on Earth is required for the antenna misalignment calibration. Each antenna footprint contains a latitude and longitude location pair on the Earth surface. There are 400 pairs in one data set for the calibration model. It is computationally expensive to calculate the corresponding Earth elevation for these data pairs, so the antenna footprint resolution is reduced. Similar topographical data pairs are grouped together with the K-means clustering algorithm. The resolution is reduced to the mean of each topographical cluster, called the cluster centroid. The corresponding Earth elevation for each cluster centroid is assigned to the entire group. Results show that the 400 data points are reduced to 60 while still maintaining algorithm performance and computational efficiency. In this work, a sensitivity analysis is also performed to show the trade-off between algorithm performance and computational efficiency as the number of cluster centroids and algorithm iterations is increased.
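A brief sketch of this reduction, assuming scikit-learn's KMeans and a stand-in elevation() function for the expensive terrain lookup (whose real interface the abstract does not specify):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
footprints = np.column_stack([rng.uniform(30, 40, 400),      # latitudes
                              rng.uniform(-110, -100, 400)]) # longitudes

# Group the 400 footprint locations into 60 topographical clusters
km = KMeans(n_clusters=60, n_init=10, random_state=0).fit(footprints)

def elevation(lat, lon):
    """Stand-in for the expensive terrain lookup (assumed interface)."""
    return 1000 + 50 * np.sin(lat) * np.cos(lon)

# One terrain lookup per centroid, shared by every member of the cluster
cent_elev = np.array([elevation(la, lo) for la, lo in km.cluster_centers_])
footprint_elev = cent_elev[km.labels_]     # 400 values from only 60 lookups
print(footprint_elev.shape)                # (400,)
```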
A software package for evaluating the performance of a star sensor operation
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Sreejith, A. G.; Nirmal, K.; Ambily, S.; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant
2017-02-01
We have developed a low-cost off-the-shelf component star sensor (StarSense) for use in minisatellites and CubeSats to determine the attitude of a satellite in orbit. StarSense is an imaging camera with a limiting magnitude of 6.5, which extracts information from star patterns it records in the images. The star sensor implements a centroiding algorithm to find the centroids of the stars in the image, a Geometric Voting algorithm for star pattern identification, and a QUEST algorithm for attitude quaternion calculation. Here, we describe the software package developed to evaluate the performance of these algorithms operating together as a single star sensor system. We simulate the ideal case, where sky background and instrument errors are omitted, and a more realistic case, where noise and camera parameters are added to the simulated images. We evaluate performance parameters of the algorithms such as attitude accuracy, calculation time, required memory, star catalog size, and sky coverage, and estimate the errors introduced by each algorithm. This software package is written for use in MATLAB. The testing is parametrized for different hardware parameters, such as the focal length of the imaging setup, the field of view (FOV) of the camera, angle measurement accuracy, and distortion effects, and can therefore be applied to evaluate the performance of such algorithms in any star sensor. For its hardware implementation on our StarSense, we are currently porting the code as functions written in C, keeping in view easy implementation on any star sensor electronics hardware.
Nidheesh, N; Abdul Nazeer, K A; Ameer, P M
2017-12-01
Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select as initial centroids data points which belong to dense regions and which are adequately separated in feature space. We compared the proposed algorithm with a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm used for cancer data classification, based on performance on ten cancer gene expression datasets. The proposed algorithm showed better overall performance than the others. There is a pressing need in the biomedical domain for simple, easy-to-use, and more accurate machine learning tools for cancer subtype prediction. The proposed algorithm is simple, easy to use, and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
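A hedged sketch of density-based seeding in this spirit: the abstract does not give the exact selection rule, so this version scores points by a Gaussian-kernel density estimate, greedily picks high-density points far from already-chosen seeds, and hands them to K-means as a deterministic initialization:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def density_seeds(X, k):
    """Pick k seeds from dense regions that are mutually well separated."""
    D = pairwise_distances(X)
    density = np.exp(-(D / np.median(D)) ** 2).sum(axis=1)  # kernel density
    seeds = [int(np.argmax(density))]                       # densest point first
    for _ in range(k - 1):
        # next seed: high density, but far from the seeds chosen so far
        score = density * D[:, seeds].min(axis=1)
        seeds.append(int(np.argmax(score)))
    return X[seeds]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in (0, 4, 8)])
init = density_seeds(X, 3)
labels = KMeans(n_clusters=3, init=init, n_init=1).fit_predict(X)
# Deterministic: repeated runs give identical labels for the same data
```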
Wiant, Harry V., Jr.; Spangler, Michael L.; Baumgras, John E.
2002-01-01
Various taper systems and the centroid method were compared to unbiased volume estimates made by importance sampling for 720 hardwood trees selected throughout the state of West Virginia. Only the centroid method consistently gave volume estimates that did not differ significantly from those made by importance sampling, although some taper equations did well for most...
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters such as the window size often require careful optimization to balance noise error, dynamic range, and the linearity of the response coefficient under different photon flux, and the method must be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear coefficient response, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
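A toy illustration of pairing a similarity function with a reduced search, assuming a Gaussian spot model: the square difference function (SDF) is evaluated only at the nine candidate offsets of each stage of a three-step-style search instead of over the whole window; sizes and the starting point are illustrative:

```python
import numpy as np

def sdf(img, tmpl, dx, dy):
    """Square-difference similarity between template and shifted subwindow."""
    h, w = tmpl.shape
    return np.sum((img[dy:dy + h, dx:dx + w] - tmpl) ** 2)

def three_step_search(img, tmpl, start=(8, 8), step=4):
    """Coarse-to-fine search: test 9 offsets, halve the step, repeat."""
    best = start
    while step >= 1:
        cands = [(best[0] + i * step, best[1] + j * step)
                 for i in (-1, 0, 1) for j in (-1, 0, 1)]
        cands = [(x, y) for x, y in cands
                 if 0 <= x <= img.shape[1] - tmpl.shape[1]
                 and 0 <= y <= img.shape[0] - tmpl.shape[0]]
        best = min(cands, key=lambda c: sdf(img, tmpl, *c))
        step //= 2
    return best            # integer displacement; subpixel refinement follows

y, x = np.mgrid[0:9, 0:9]
tmpl = np.exp(-((x - 4) ** 2 + (y - 4) ** 2) / 4.0)   # Gaussian spot template
img = np.zeros((25, 25))
img[11:20, 6:15] = tmpl                   # spot displaced to (dx, dy) = (6, 11)
print(three_step_search(img, tmpl))       # -> (6, 11)
```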
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining
2017-12-01
We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to check the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
Fast clustering using adaptive density peak detection.
Wang, Xiao-Feng; Xu, Yifan
2017-12-01
Common limitations of clustering methods include slow algorithm convergence, instability with respect to the pre-specification of a number of intrinsic parameters, and lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameter can then be calculated from equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method needs to perform only a single step without any iteration, and thus is fast and has great potential for application to big data analysis. A user-friendly R package ADPclust is developed for public use.
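A compact sketch of the density-peak idea with a kernel (rather than truncated-count) density and silhouette-driven selection of the number of centers; assigning each point to its nearest center is a simplification of the original rule, which propagates labels from each point's nearest higher-density neighbor:

```python
import numpy as np
from sklearn.metrics import pairwise_distances, silhouette_score

def density_peaks(X, k):
    """Rank points by local density times distance-to-denser-point; take top k."""
    D = pairwise_distances(X)
    rho = np.exp(-(D / np.median(D)) ** 2).sum(axis=1)   # kernel local density
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = rho > rho[i]
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    centers = np.argsort(rho * delta)[-k:]
    return np.argmin(D[:, centers], axis=1)              # nearest-center labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.4, (80, 2)) for c in (0, 3, 6)])
# Choose k by maximizing the average silhouette index, as in ADPclust
scores = {k: silhouette_score(X, density_peaks(X, k)) for k in range(2, 7)}
print("best k:", max(scores, key=scores.get))
```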
Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan
2017-03-01
Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so improving the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. The method is called the gradient segmentation interpolation (GSI) algorithm. To take full advantage of the pixels in the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by simulation. Finally, an experiment was set up to verify the GSI and SSW algorithms. The results indicate that both algorithms improve locating accuracy over that of a traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then refines the codebook, starting from the initial codebook of representative vectors supplied by the PCA step. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
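A minimal sketch of the PCA-LBG-Centroid variant: training vectors are grouped by quantiles of their first-principal-component projection, each group's centroid seeds the codebook, and scikit-learn's KMeans (whose Lloyd iterations coincide with LBG refinement) polishes it; the codebook size and vector dimension are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 16))        # training vectors (e.g. 4x4 blocks)

# Group training vectors by their projection onto the first principal component
proj = PCA(n_components=1).fit_transform(vectors).ravel()
cb_size = 32
edges = np.quantile(proj, np.linspace(0, 1, cb_size + 1))
groups = np.clip(np.searchsorted(edges, proj, side="right") - 1, 0, cb_size - 1)

# PCA-LBG-Centroid: each group's centroid becomes an initial codeword
init = np.array([vectors[groups == g].mean(axis=0) for g in range(cb_size)])

# LBG refinement of the initial codebook (equivalent to Lloyd/k-means steps)
codebook = KMeans(n_clusters=cb_size, init=init, n_init=1).fit(vectors)
print(codebook.cluster_centers_.shape)       # (32, 16)
```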
Two-dimensional shape recognition using oriented-polar representation
NASA Astrophysics Data System (ADS)
Hu, Neng-Chung; Yu, Kuo-Kan; Hsu, Yung-Li
1997-10-01
To deal with position-, scale-, and rotation-invariant (PSRI) object recognition, we utilize PSRI properties of the image obtained from an object, for example the centroid of the image. The position of the centroid relative to the boundary of the image is invariant under rotation, scaling, and translation of the image. To obtain the information in the image, we use a technique similar to the Radon transform, called the oriented-polar representation of a 2D image. In this representation, two specific points, the centroid and the weighted mean point, are selected to form an initial ray; the image is then sampled with N angularly equispaced rays departing from the initial ray. Each ray contains a number of intersections and the distance information from the centroid to the intersections. The shape recognition algorithm is based on the least total error of these two items of information. Together with a simple noise removal step and a typical backpropagation neural network, this algorithm is simple, yet PSRI is achieved with a high recognition rate.
Iris recognition using image moments and k-means algorithm.
Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed
2014-01-01
This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of computed Euclidean distance. The described model exhibits an accuracy of 98.5%. PMID:24977221
NASA Astrophysics Data System (ADS)
Veiga, C.; McClelland, J.; Moinuddin, S.; Ricketts, K.; Modat, M.; Ourselin, S.; D'Souza, D.; Royle, G.
2014-03-01
The purpose of this work is to validate an in-house deformable image registration (DIR) algorithm for adaptive radiotherapy of head and neck (HN) patients. We aim to use the registrations to estimate the "dose of the day" and assess the need to replan. NiftyReg is an open-source implementation of the B-splines deformable registration algorithm, developed in our institution. We registered a planning CT to a CBCT acquired midway through treatment for 5 HN patients who required replanning. We investigated 16 different parameter settings that had previously shown promising results. To assess the registrations, structures delineated in the CT were warped and compared with contours manually drawn by the same clinical expert on the CBCT. This structure set contained vertebral bodies and soft tissue. The Dice similarity coefficient (DSC), overlap index (OI), centroid position, and distance between structure surfaces were calculated for every registration, and a set of parameters that produces good results for all datasets was found. We achieve a median DSC of 0.845, a median OI of 0.889, errors smaller than 2 mm in centroid position, and over 90% of the warped surface pixels lying within 2 mm of the manually drawn surfaces. By using appropriate DIR parameters, we are able to register the planning geometry (pCT) to the daily geometry (CBCT).
An underwater turbulence degraded image restoration algorithm
NASA Astrophysics Data System (ADS)
Furhad, Md. Hasan; Tahtali, Murat; Lambert, Andrew
2017-09-01
Underwater turbulence occurs due to random fluctuations of temperature and salinity in the water. These fluctuations are responsible for variations in water density, refractive index, and attenuation, which impose random geometric distortions, spatio-temporally varying blur, limited range visibility, and limited contrast on the acquired images. Several restoration techniques have been developed to address this problem, such as image-registration-based, lucky-region-based, and centroid-based image restoration algorithms. Although these methods demonstrate good results in terms of removing turbulence, they require computationally intensive image registration, with high CPU load and memory allocation. Thus, in this paper, a simple patch-based dictionary learning algorithm is proposed to restore the image while avoiding the costly image registration step. Dictionary learning is a machine learning technique which builds a dictionary of non-zero atoms derived from the sparse representation of an image or signal. The image is divided into several patches and the sharp patches are detected among them. Next, dictionary learning is performed on these patches to estimate the restored image. Finally, an image deconvolution algorithm is applied to the estimated restored image to remove residual noise.
Comparative Analysis of Document level Text Classification Algorithms using R
NASA Astrophysics Data System (ADS)
Syamala, Maganti; Nalini, N. J., Dr; Maguluri, Lakshamanaphaneendra; Ragupathy, R., Dr.
2017-08-01
Over the past few decades, tremendous volumes of data have become available on the Internet, in both structured and unstructured form, and the amount of information continues to grow exponentially, so there is an emergent need for text classifiers. Text mining is an interdisciplinary field which draws on information retrieval, data mining, machine learning, statistics, and computational linguistics. To handle this situation, a wide range of supervised learning algorithms has been introduced. Among these, K-Nearest Neighbor (KNN) is an efficient and simple classifier in the text classification family. But KNN suffers from imbalanced class distributions and noisy term features. To cope with this challenge, we use document-based centroid dimensionality reduction (CentroidDR) using R programming. By combining these two text classification techniques, the KNN and centroid classifiers, we propose a scalable and effective flat classifier, called MCenKNN, which works substantially better than CenKNN.
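A small illustration of the centroid-classifier half of such a hybrid, sketched in Python with scikit-learn rather than the paper's R: NearestCentroid builds one prototype vector per class in TF-IDF space, which a KNN stage could then refine; the toy corpus is an assumption:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid

docs = ["the rocket engine burned liquid fuel",
        "the car engine needs new oil",
        "astronauts boarded the rocket at dawn",
        "the mechanic changed the oil filter"]
labels = ["space", "autos", "space", "autos"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

clf = NearestCentroid().fit(X, labels)       # one centroid per class
print(clf.predict(vec.transform(["rocket fuel tank"])))   # -> ['space']
```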
NASA Astrophysics Data System (ADS)
Li, Xiaoliang; Luo, Lei; Li, Pengwei; Yu, Qingkui
2018-03-01
The image sensor in a satellite optical communication system may generate noise due to space irradiation damage, leading to deviations in the determination of the light-spot centroid. Based on irradiation test data for CMOS devices, simulated defect spots of different sizes have been used to calculate the centroid deviation with the grey-level centroid algorithm, and the impact on the tracking and pointing accuracy of the system has been analyzed. The results show that both the number and the positions of irradiation-induced defect pixels contribute to the spot centroid deviation, and that larger spots show less deviation. Finally, considering space radiation damage, suggestions are made for constraints on spot size selection.
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-05-31
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches exist, but their robustness is usually not addressed or investigated. The goal of this paper is to show how to robustify floor estimation when probabilistic approaches with a low number of parameters are employed. Such an approach allows building-independent estimation and lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of RSS-based floor detection algorithms.
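A toy sketch of the weighted-centroid branch of this family: the vertical position of the MS is estimated as an RSS-weighted centroid of access-point heights, with a simple trimming rule standing in for the paper's robustification (the trim threshold, AP layout, and dBm-to-weight mapping are all assumptions):

```python
import numpy as np

def weighted_centroid_height(ap_z, rss_dbm, trim_db=15.0):
    """RSS-weighted centroid of AP heights, weak APs trimmed (robust step)."""
    ap_z, rss = np.asarray(ap_z, float), np.asarray(rss_dbm, float)
    keep = rss >= rss.max() - trim_db      # drop faint/outlier measurements
    w = 10 ** (rss[keep] / 10.0)           # dBm -> linear power weights
    return np.sum(w * ap_z[keep]) / np.sum(w)

ap_heights = [3.0, 3.0, 6.0, 9.0, 9.0]     # AP z-coordinates per floor (m)
rss = [-45, -50, -60, -80, -85]            # RSS heard at the mobile station
z = weighted_centroid_height(ap_heights, rss)
print(f"estimated height {z:.1f} m -> floor {round(z / 3.0)}")
```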
NASA Astrophysics Data System (ADS)
Basoglu, Burak; Halicioglu, Kerem; Albayrak, Muge; Ulug, Rasit; Tevfik Ozludemir, M.; Deniz, Rasim
2017-04-01
In the last decade, the importance of high-precision geoid determination at the local and national level has been pointed out by the Turkish National Geodesy Commission. The Commission has also placed the modernization of the national height system of Turkey on the agenda, and several projects have been realized in recent years. In Istanbul, a GNSS/levelling geoid was defined in 2005 for the metropolitan area of the city with an accuracy of ±3.5 cm. In order to achieve better accuracy in this area, the project "Local Geoid Determination with Integration of GNSS/Levelling and Astro-Geodetic Data" has been conducted at Istanbul Technical University and Bogazici University KOERI since January 2016. The project is funded by The Scientific and Technological Research Council of Turkey. Within the scope of the project, modernization studies of the Digital Zenith Camera System are being carried out in terms of hardware components and software development. The central subjects are the star catalogues and the centroiding algorithm used to identify the stars in the zenithal star field. During the test observations of the Digital Zenith Camera System performed between 2013 and 2016, final results were calculated using the PSF method for star centroiding and the second USNO CCD Astrograph Catalogue (UCAC2) for the reference star positions. This study aims to investigate the positional accuracy of the star images by comparing different centroiding algorithms and available star catalogues used in astro-geodetic observations conducted with the digital zenith camera system.
Hanigan, Ivan; Hall, Gillian; Dear, Keith B G
2006-09-13
To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations, but the rationale for using one technique rather than another, the significance of the differences in the values obtained, and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and on population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Options based on values derived from sites internal to postal areas, or from nearest-neighbour sites--that is, using proximity polygons around weather stations intersected with postal areas--tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within a 50 kilometre radius of centroids, with data weighted by distance from the centroid, gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from those based on population-weighted centroids and from population-weighted averages of sub-unit estimates. To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. The most appropriate method conceptually is the use of weather data from sites within a 50 kilometre radius of the area, weighted to population centres, but a simpler acceptable option is to weight to the geographic centroid.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) the geometric centroid of each ZIP code polygon as the origin, 2) the population centroid as the origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, and 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
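A tiny worked example of the difference between the two centroid origins, using invented census-block figures: the population centroid pulls toward where people actually live, while the geometric centroid depends only on the polygon's shape:

```python
import numpy as np

# Census blocks inside one ZIP code: (lat, lon, population) -- illustrative
blocks = np.array([[43.20, -71.54, 1200],
                   [43.23, -71.50,  300],
                   [43.18, -71.60,   50]])

geometric = blocks[:, :2].mean(axis=0)                    # shape only
weights = blocks[:, 2] / blocks[:, 2].sum()
population = (blocks[:, :2] * weights[:, None]).sum(axis=0)

print("geometric centroid :", geometric)    # origin for geometry-based method
print("population centroid:", population)   # origin for population-based method
```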
NASA Astrophysics Data System (ADS)
Coleman, J. E.; Ekdahl, C. A.; Moir, D. C.; Sullivan, G. W.; Crawford, M. T.
2014-09-01
Axial beam centroid and beam breakup (BBU) measurements were conducted on an 80 ns FWHM, intense relativistic electron bunch with an injected energy of 3.8 MV and current of 2.9 kA. The intense relativistic electron bunch is accelerated and transported through a nested solenoid and ferrite induction core lattice consisting of 64 elements, exiting the accelerator with a nominal energy of 19.8 MeV. The principal objective of these experiments is to quantify the coupling of the beam centroid motion to the BBU instability and validate the theory of this coupling for the first time. Time resolved centroid measurements indicate a reduction in the BBU amplitude, ⟨ξ⟩, of 19% and a reduction in the BBU growth rate (Γ) of 4% by reducing beam centroid misalignments ~50% throughout the accelerator. An investigation into the contribution of the misaligned elements is made. An alignment algorithm is presented in addition to a qualitative comparison of experimental and calculated results which include axial beam centroid oscillations, BBU amplitude, and growth with different dipole steering.
Zhang, Junfeng; Chen, Wei; Gao, Mingyi; Shen, Gangxiang
2017-10-30
In this work, we proposed two k-means-clustering-based algorithms to mitigate fiber nonlinearity for 64-quadrature-amplitude-modulation (64-QAM) signals: a training-sequence-assisted k-means algorithm and a blind k-means algorithm. We experimentally demonstrated the proposed k-means-clustering-based fiber nonlinearity mitigation techniques in a 75-Gb/s 64-QAM coherent optical communication system. The proposed algorithms have reduced clustering complexity and low data redundancy, and they are able to quickly find appropriate initial centroids and correctly select the centroids of the clusters to obtain globally optimal solutions for large k values. We measured the bit-error-ratio (BER) performance of the 64-QAM signal with different launch powers into the 50-km single-mode fiber; the proposed techniques can greatly mitigate the signal impairments caused by amplified spontaneous emission noise and fiber Kerr nonlinearity and improve the BER performance.
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.
CCD Centroiding Experiment for Correcting a Distorted Image on the Focal Plane
NASA Astrophysics Data System (ADS)
Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Nakajima, Tadashi; Kawano, Nobuyuki; Tazawa, Seiichi; Yamada, Yoshiyuki; Hanada, Hideo; Asari, Kazuyoshi; Tsuruta, Seiitsu
2006-10-01
JASMINE (Japan Astrometry Satellite Mission for Infrared Exploration) and ILOM (In situ Lunar Orientation Measurement) are space missions in progress at the National Astronomical Observatory of Japan. These two projects require a common astrometric technique for obtaining precise positions of star images on solid-state detectors in order to accomplish their objectives. In the laboratory, we have carried out measurements of the centroids of artificial star images on a CCD array in order to investigate the precision of the star positions, using an algorithm that estimates them from photon-weighted means of the star images. In calibrating the position of a star image at the focal plane, we have also taken into account the lowest-order distortion due to optical aberrations, which is proportional to the cube of the distance from the optical axis. We find that the measurement precision of the star positions reaches below 1/100 pixel for a single measurement.
The centroidal algorithm in molecular similarity and diversity calculations on confidential datasets
NASA Astrophysics Data System (ADS)
Trepalin, Sergey; Osadchiy, Nikolay
2005-09-01
Chemical structure provides an exhaustive description of a compound, but it is often proprietary and thus an impediment in the exchange of information. For example, structure disclosure is often needed for the selection of the most similar or dissimilar compounds. The authors propose a centroidal algorithm based on structural fragments (screens) that can be efficiently used for similarity and diversity selections without disclosing the structures in the reference set. For increased security, the authors recommend that such a set contain at least some tens of structures. Analysis of reverse engineering feasibility showed that the problem difficulty grows as the screen radius decreases. The algorithm is illustrated with concrete calculations on known steroidal, quinoline, and quinazoline drugs. We also investigate the problem of scaffold identification in a combinatorial library dataset. The results show that relatively small screens of radius equal to 2 bond lengths perform well in similarity sorting, while radius 4 screens yield better results in diversity sorting. The software implementation of the algorithm, taking an SDF file with a reference set, generates screens of various radii which are subsequently used for the similarity and diversity sorting of external SDFs. Since reverse engineering of the reference set molecules from their screens has the same difficulty as the RSA asymmetric encryption algorithm, generated screens can be stored openly without further encryption. This approach ensures an end user transfers only a set of structural fragments and no other data. Like other encryption algorithms, the centroid algorithm cannot give a 100% guarantee of protecting a chemical structure in a dataset, but the probability of identifying the initial structure is very small, on the order of 10^(-40) in typical cases.
Centroid-moment tensor inversions using high-rate GPS waveforms
NASA Astrophysics Data System (ADS)
O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.
2012-10-01
Displacement time-series recorded by Global Positioning System (GPS) receivers are a new type of near-field waveform observation of the seismic source. We have developed an inversion method which enables the recovery of an earthquake's mechanism and centroid coordinates from such data. Our approach is identical to that of the 'classical' Centroid-Moment Tensor (CMT) algorithm, except that we forward model the seismic wavefield using a method that is amenable to the efficient computation of synthetic GPS seismograms and their partial derivatives. We demonstrate the validity of our approach by calculating CMT solutions using 1 Hz GPS data for two recent earthquakes in Japan. These results are in good agreement with independently determined source models of these events. With wider availability of data, we envisage the CMT algorithm providing a tool for the systematic inversion of GPS waveforms, as is already the case for teleseismic data. Furthermore, this general inversion method could equally be applied to other near-field earthquake observations such as those made using accelerometers.
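For a fixed centroid location and origin time, the moment tensor enters the waveforms linearly through excitation kernels, so the core of a CMT-style inversion is a linear least-squares step; a schematic with random stand-in kernels (real kernels would come from the forward modelling described by the authors, and the centroid search wraps this step in an outer iteration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_mt = 600, 6             # waveform samples, moment-tensor elements

# G[:, k] = synthetic seismogram for a unit k-th moment-tensor element
G = rng.normal(size=(n_samples, n_mt))                  # stand-in kernels
m_true = np.array([1.0, -0.4, -0.6, 0.2, 0.1, -0.3])    # Mrr, Mtt, Mpp, ...
d = G @ m_true + 0.05 * rng.normal(size=n_samples)      # observed waveforms

# With the centroid fixed, the source inversion is linear least squares
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))            # close to m_true
```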
Spatial pattern recognition of seismic events in South West Colombia
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber
2013-09-01
Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative and subjective analysis of data. Spatial pattern recognition provides a well founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data of the South West Colombian seismic database. In this research, clustering tendency analysis validates whether seismic database possesses a clustering structure. A non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to centroid initial positions, we proposed a methodology to initialize centroids that generates stable partitions with respect to centroid initialization. As a result of this work, a public software tool provides the user with the routines developed for clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretations of seismic activities in South West Colombia.
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
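A toy version of a noise-injected centroid update, using additive noise that decays over iterations (an annealing-style schedule chosen for illustration; the paper's formal noise-benefit conditions come from the expectation-maximization framework and are not reproduced here):

```python
import numpy as np

def noisy_kmeans(X, k, n_iter=50, noise0=1.0, seed=0):
    """k-means whose centroid updates carry additive noise decaying to zero."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]          # initial centroids
    for it in range(n_iter):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):                                 # noisy centroid update
                C[j] = pts.mean(0) + noise0 / (it + 1) * rng.normal(size=X.shape[1])
    return C, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in (0, 5)])
C, labels = noisy_kmeans(X, 2)
print(np.round(C, 1))
```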
Adaptive fuzzy leader clustering of complex data sets in pattern recognition
NASA Technical Reports Server (NTRS)
Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda
1992-01-01
A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means system equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
Are judgments a form of data clustering? Reexamining contrast effects with the k-means algorithm.
Boillaud, Eric; Molina, Guylaine
2015-04-01
A number of theories have been proposed to explain in precise mathematical terms how statistical parameters and sequential properties of stimulus distributions affect category ratings. Various contextual factors such as the mean, the midrange, and the median of the stimuli; the stimulus range; the percentile rank of each stimulus; and the order of appearance have been assumed to influence judgmental contrast. A data clustering reinterpretation of judgmental relativity is offered wherein the influence of the initial choice of centroids on judgmental contrast involves 2 combined frequency and consistency tendencies. Accounts of the k-means algorithm are provided, showing good agreement with effects observed on multiple distribution shapes and with a variety of interaction effects relating to the number of stimuli, the number of response categories, and the method of skewing. Experiment 1 demonstrates that centroid initialization accounts for contrast effects obtained with stretched distributions. Experiment 2 demonstrates that the iterative convergence inherent to the k-means algorithm accounts for the contrast reduction observed across repeated blocks of trials. The concept of within-cluster variance minimization is discussed, as is the applicability of a backward k-means calculation method for inferring, from empirical data, the values of the centroids that would serve as a representation of the judgmental context. (c) 2015 APA, all rights reserved.
Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes
2016-01-01
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM, and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. PMID:27455279
Accurate beacon positioning method for satellite-to-ground optical communication.
Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing
2017-12-11
In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main factors influencing the acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication, and propose a new locating algorithm for the beacon which uses as weights the correlation coefficients obtained by curve fitting of the image data. By performing a long-distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the centroid of the beacon. Furthermore, for a light spot distorted by atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS), integrating the definition and measurement of spatial features with the calculation of parameter values for them, presents considerable advantages. This paper presents the modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler). Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort required to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical under drought conditions.
Time resolving beam position measurement and analysis of beam unstable movement in PSR
NASA Astrophysics Data System (ADS)
Aleksandrov, A. V.
2000-11-01
Precise measurement of beam centroid motion is very important for understanding the fast transverse instability in the Los Alamos Proton Storage Ring (PSR). The proton bunch in the PSR is long, so different parts of the bunch can have different betatron phases and move differently; a time-resolved position measurement is therefore needed. A wide-band stripline beam position monitor (BPM) can be adequate if a proper processing algorithm is used. In this work we present the results of an analysis of unstable transverse beam motion using a time-resolved processing algorithm. The suggested algorithm calculates the transverse position of different parts of the beam on each turn; the beam centroid motion on successive turns can then be decomposed into a series of plane travelling waves in the beam frame of reference, providing important information on the development of the instability. Some general features of the fast transverse instability, unknown before, are discovered.
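The per-slice position that such processing yields is conventionally formed from the difference-over-sum of opposing stripline electrode signals at each time sample; a hedged sketch with a simulated bunch and an assumed position sensitivity:

```python
import numpy as np

def slice_positions(v_right, v_left, sensitivity_mm=10.0):
    """Per-sample transverse centroid from opposing stripline signals (mm)."""
    s, d = v_right + v_left, v_right - v_left
    x = np.full_like(s, np.nan)
    ok = s > 0.05 * s.max()               # only where there is beam signal
    x[ok] = sensitivity_mm * d[ok] / s[ok]
    return x

t = np.linspace(0, 200e-9, 400)            # one-turn record, 200 ns window
bunch = np.exp(-((t - 1e-7) / 4.8e-8) ** 2)     # ~80 ns FWHM bunch profile
x_true = 0.5 * np.sin(2 * np.pi * 25e6 * t)     # head-tail betatron variation (mm)
v_r = bunch * (1 + x_true / 10.0)
v_l = bunch * (1 - x_true / 10.0)
x = slice_positions(v_r, v_l)               # position of each slice on this turn
```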
A physics-motivated Centroidal Voronoi Particle domain decomposition method
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
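A minimal sketch of the CVT half of the method: Lloyd iterations over a cloud of computational elements, each step assigning elements to the nearest generator (the Voronoi step) and moving every generator to the centroid of its region, which monotonically decreases the CVT energy; the Voronoi Particle relaxation that enforces load balance is not reproduced here:

```python
import numpy as np

def lloyd_cvt(points, k, n_iter=50, seed=0):
    """Centroidal Voronoi Tessellation of sample points via Lloyd iterations."""
    rng = np.random.default_rng(seed)
    gen = points[rng.choice(len(points), k, replace=False)]   # generators
    for _ in range(n_iter):
        # Voronoi assignment: each point belongs to its nearest generator
        owner = ((points[:, None] - gen[None]) ** 2).sum(-1).argmin(axis=1)
        # Move each generator to the centroid of its region (lowers CVT energy)
        for j in range(k):
            cell = points[owner == j]
            if len(cell):
                gen[j] = cell.mean(axis=0)
    return gen, owner

pts = np.random.default_rng(1).random((4000, 2))   # 2D element cloud
generators, part = lloyd_cvt(pts, 8)               # 8 compact subdomains
print("subdomain sizes:", np.bincount(part, minlength=8))
```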
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods typically have some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might no longer be valid. In order to overcome this weakness, we proposed a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133
Identify High-Quality Protein Structural Models by Enhanced K-Means.
Wu, Hongjie; Li, Haiou; Jiang, Min; Chen, Cheng; Lv, Qiang; Wu, Chuang
2017-01-01
Background. One critical issue in protein three-dimensional structure prediction using either ab initio or comparative modeling involves identification of high-quality protein structural models from generated decoys. Currently, clustering algorithms are widely used to identify near-native models; however, their performance is dependent upon different conformational decoys, and, for some algorithms, the accuracy declines when the decoy population increases. Results. Here, we proposed two enhanced K-means clustering algorithms capable of robustly identifying high-quality protein structural models. The first one employs the clustering algorithm SPICKER to determine the initial centroids for basic K-means clustering (SK-means), whereas the other employs squared distance to optimize the initial centroids (K-means++). Our results showed that SK-means and K-means++ were more robust as compared with SPICKER alone, detecting 33 (59%) and 42 (75%) of 56 targets, respectively, with template modeling scores better than or equal to those of SPICKER. Conclusions. We observed that the classic K-means algorithm showed a similar performance to that of SPICKER, which is a widely used algorithm for protein-structure identification. Both SK-means and K-means++ demonstrated substantial improvements relative to results from SPICKER and classical K-means. PMID:28421198
360-degrees profilometry using strip-light projection coupled to Fourier phase-demodulation.
Servin, Manuel; Padilla, Moises; Garnica, Guillermo
2016-01-11
360 degrees (360°) digitalization of three-dimensional (3D) solids using a projected light-strip is a well-established technique in academic and commercial profilometers. These profilometers project a light-strip over the digitized solid while the solid is rotated a full revolution or 360 degrees. A computer program then typically extracts the centroid of this light-strip, and by triangulation one obtains the shape of the solid. Here, instead of using intensity-based light-strip centroid estimation, we propose to use Fourier phase-demodulation for 360° solid digitalization. The advantage of Fourier demodulation over strip-centroid estimation is that the accuracy of phase-demodulation increases linearly with the fringe density, while the centroid-estimation errors of the strip-light approach are independent of it. We propose first to construct a carrier-frequency fringe-pattern by closely adding the individual light-strip images recorded while the solid is being rotated. Next, this high-density fringe-pattern is phase-demodulated using the standard Fourier technique. To test the feasibility of this Fourier demodulation approach, we digitized two solids with increasing topographic complexity: a Rubik's cube and a plastic model of a human skull. According to our results, phase demodulation based on the Fourier technique is less noisy than triangulation based on centroid light-strip estimation. Moreover, Fourier demodulation also provides the amplitude of the analytic signal, which is valuable information for the visualization of surface details.
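A rough sketch of the standard Fourier (Takeda-style) fringe demodulation step described above, applied to one row of the composed fringe pattern; the carrier frequency f0 and bandwidth bw are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fourier_demodulate_row(row, f0, bw):
    """Phase-demodulate one image row carrying a fringe pattern at carrier f0
    (cycles/sample); bw is the half-width of the retained side lobe."""
    F = np.fft.fft(row)
    f = np.fft.fftfreq(row.size)
    side = np.where(np.abs(f - f0) < bw, F, 0)   # keep only the +f0 lobe
    analytic = np.fft.ifft(side)                 # complex analytic signal
    phase = np.unwrap(np.angle(analytic))
    return phase - 2 * np.pi * f0 * np.arange(row.size)  # remove carrier tilt
```

Here np.abs(analytic) gives the amplitude that the abstract mentions as a by-product for visualizing surface detail.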
Development of a novel constellation based landmark detection algorithm
NASA Astrophysics Data System (ADS)
Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.
2013-03-01
Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information of the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust as it was successfully run on a large dataset that included different kinds of images with various orientation, spacing, and origin.
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement by automated estimation of major parameters, which brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image-sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large-valued parameters. Results also confirm that the proposed method and its variants achieve high detection accuracies (~98% mean F-measure) irrespective of large variations of filter parameters and noise levels. PMID:25020042
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis
2014-01-01
The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
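For reference, minimal versions of two of the four compared trackers (maximum detection and centroid detection) on a sampled FBG reflection spectrum; the 50% threshold is an illustrative choice, not the paper's setting:

```python
import numpy as np

def peak_max(wl, spec):
    """Maximum detection: resolution limited by the wavelength sampling step."""
    return wl[np.argmax(spec)]

def peak_centroid(wl, spec, thresh=0.5):
    """Centroid detection over the main lobe; biased when the spectrum distorts."""
    m = spec >= thresh * spec.max()          # restrict to the main lobe
    return np.sum(wl[m] * spec[m]) / np.sum(spec[m])
```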
A fast algorithm to compute precise type-2 centroids for real-time control applications.
Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R
2015-02-01
An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids, with lower root-mean-square error and computational overhead than the existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in plant response.
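The classical iterative Karnik-Mendel baseline that the closed-form formula is benchmarked against can be sketched as follows for a sampled IT2 FS (the paper's own closed form is not reproduced here); x is assumed sorted ascending:

```python
import numpy as np

def km_endpoint(x, lmf, umf, right=True):
    """One Karnik-Mendel centroid endpoint from lower/upper memberships."""
    y = np.sum(x * (lmf + umf)) / np.sum(lmf + umf)   # initial guess
    while True:
        k = np.searchsorted(x, y)                     # switch point
        left, rite = (lmf, umf) if right else (umf, lmf)
        w = np.concatenate([left[:k], rite[k:]])      # weight pattern flips per side
        y_new = np.sum(x * w) / np.sum(w)
        if np.isclose(y_new, y):
            return y_new
        y = y_new

def uncertainty_span(x, lmf, umf):
    """Span of uncertainty: distance between right and left centroid endpoints."""
    return km_endpoint(x, lmf, umf, True) - km_endpoint(x, lmf, umf, False)
```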
Kinematic model for the space-variant image motion of star sensors under dynamical conditions
NASA Astrophysics Data System (ADS)
Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun
2015-06-01
A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, which involves all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations of the focal plane correspond to the slightly different orientation and extent of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error of eight successive iterations <0.002 pixel is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating the compensation algorithms of motion-blurred images.
Lung tumor tracking in fluoroscopic video based on optical flow
Xu, Qianyi; Hamilton, Russell J.; Schowengerdt, Robert A.; Alexander, Brian; Jiang, Steve B.
2008-01-01
Respiratory gating and tumor tracking for dynamic multileaf collimator delivery require accurate and real-time localization of the lung tumor position during treatment. Deriving tumor position from external surrogates such as abdominal surface motion may have large uncertainties due to the intra- and interfraction variations of the correlation between the external surrogates and internal tumor motion. Implanted fiducial markers can be used to track tumors fluoroscopically in real time with sufficient accuracy. However, it may not be a practical procedure when implanting fiducials bronchoscopically. In this work, a method is presented to track the lung tumor mass or relevant anatomic features projected in fluoroscopic images without implanted fiducial markers based on an optical flow algorithm. The algorithm generates the centroid position of the tracked target and ignores shape changes of the tumor mass shadow. The tracking starts with a segmented tumor projection in an initial image frame. Then, the optical flow between this and all incoming frames acquired during treatment delivery is computed as initial estimations of tumor centroid displacements. The tumor contour in the initial frame is transferred to the incoming frames based on the average of the motion vectors, and its positions in the incoming frames are determined by fine-tuning the contour positions using a template matching algorithm with a small search range. The tracking results were validated by comparing with clinician determined contours on each frame. The position difference in 95% of the frames was found to be less than 1.4 pixels (∼0.7 mm) in the best case and 2.8 pixels (∼1.4 mm) in the worst case for the five patients studied. PMID:19175094
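A simplified sketch of the flow-then-shift idea, using OpenCV's Farneback dense optical flow as a stand-in for the paper's algorithm (the template-matching fine-tuning step is omitted); grayscale uint8 frames and a binary mask of the tumor contour are assumed:

```python
import cv2
import numpy as np

def track_region(prev, curr, mask):
    """Shift the tracked region by the mean flow vector inside its mask."""
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx = float(flow[..., 0][mask > 0].mean())     # averaging the motion vectors
    dy = float(flow[..., 1][mask > 0].mean())     # ignores shape changes
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    moved = cv2.warpAffine(mask, M, (mask.shape[1], mask.shape[0]))
    return moved, (dx, dy)
```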
Algorithms used in the Airborne Lidar Processing System (ALPS)
Nagle, David B.; Wright, C. Wayne
2016-05-23
The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data—digitized laser-return waveforms, position, and attitude data—to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
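A toy version of the centroid-analysis detector for opaque hard surfaces, assuming a digitized return waveform with sample spacing dt; the noise-floor handling here is an illustrative simplification, not ALPS code:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def centroid_range(waveform, t0, dt):
    """Slant range from the amplitude-weighted centroid of a laser return."""
    w = waveform - np.median(waveform)      # crude noise-floor removal
    w = np.clip(w, 0, None)
    t = t0 + dt * np.arange(w.size)         # sample times (s)
    t_c = np.sum(t * w) / np.sum(w)         # centroid return time
    return 0.5 * C * t_c                    # two-way travel time -> one-way range
```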
NASA Astrophysics Data System (ADS)
Han, Cheongho; Jeong, Youngjin; Kim, Ho-Il
1998-11-01
Recently, Alard, Mao, & Guibert and Alard proposed to detect the shift of a star's image centroid, δx, as a method to identify the lensed source among blended stars. Goldberg & Woźniak actually applied this method to the OGLE-1 database and found that seven of 15 events showed significant centroid shifts of δx ≳ 0.2″. The amount of centroid shift has been estimated theoretically by Goldberg; however, he treated the problem in general and did not apply it to a particular survey or field and therefore based his estimate on simple toy-model luminosity functions (i.e., power laws). In this paper, we construct the expected distribution of δx for Galactic bulge events based on the precise stellar luminosity function observed by Holtzman et al. using the Hubble Space Telescope. Their luminosity function is complete up to M_I ~ 9.0 (M_V ~ 12), which corresponds to faint M-type stars. In our analysis we find that regular blending cannot produce a large fraction of events with measurable centroid shifts. By contrast, a significant fraction of events would have measurable centroid shifts if they are affected by amplification-bias blending. Therefore, the measurements of large centroid shifts for an important fraction of microlensing events by Goldberg & Woźniak confirm the prediction of Han & Alard that a large fraction of Galactic bulge events are affected by amplification-bias blending.
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
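The alignment step reduces to shifting each frame by the mean displacement of its fiduciary-mark centroids from those in the background image; a whole-pixel sketch (real PIV alignment would interpolate sub-pixel shifts):

```python
import numpy as np

def align_to_background(img, fid_xy, bg_xy):
    """Shift img so fiduciary-mark centroids land on the background positions.
    fid_xy, bg_xy: (n, 2) arrays of (row, col) centroid coordinates."""
    d = np.mean(np.asarray(bg_xy) - np.asarray(fid_xy), axis=0)
    shift = (int(round(d[0])), int(round(d[1])))
    return np.roll(img, shift, axis=(0, 1))  # wrap-around at borders is ignored here
```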
Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm
NASA Astrophysics Data System (ADS)
Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen
2011-08-01
Bone age assessment is a common radiological examination used in pediatrics to diagnose the discrepancy between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we proposed a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges fast and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on disk traverse-subtraction filter to segment the phalanx. Moreover, two more segmentation methods: adaptive two-mean and adaptive two-mean clustering were performed, and their results were compared with the segmentation algorithm based on disk traverse-subtraction filter using five indices comprising misclassification error, relative foreground area error, modified Hausdorff distances, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods was discussed. The result showed that our method had a better performance than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework to design MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
USDA-ARS?s Scientific Manuscript database
A computer algorithm was created to inspect scanned images from DNA microarray slides developed to rapidly detect and genotype E. coli O157 virulent strains. The algorithm computes centroid locations for signal and background pixels in RGB space and defines a plane perpendicular to the line connect...
NASA Astrophysics Data System (ADS)
Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.
2017-07-01
Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering on 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and to analyze the result. Using Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids indicates the density of interactions. Protein interactions that are connected in a tissue form a protein complex that serves as a specific biological process unit. The analysis shows that R-MCL clustering groups the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.
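A compact sketch of Regularized MCL on an adjacency matrix, in which the expansion step of classical MCL is replaced by multiplication with the original transition matrix; the inflation parameter r = 2 is a common default, not necessarily the study's setting:

```python
import numpy as np

def rmcl(A, r=2.0, iters=50):
    """Regularized Markov Clustering; returns a cluster label per node."""
    A = A + np.eye(len(A))              # self-loops stabilize the flow
    M_G = A / A.sum(axis=0)             # column-stochastic transition matrix
    M = M_G.copy()
    for _ in range(iters):
        M = M @ M_G                     # regularization (in place of expansion)
        M = M ** r                      # inflation strengthens strong flows
        M /= M.sum(axis=0)
    return np.argmax(M, axis=0)         # nodes sharing an attractor: one cluster
```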
Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle
NASA Astrophysics Data System (ADS)
Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon
2018-03-01
Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems to provide a large multiplex of sensors positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, the accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both with simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
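The first-moment (centre-of-gravity) estimator used in the study is simply an intensity-weighted mean over the pixel grid; a background-subtracted sketch, with the background level as an assumed input:

```python
import numpy as np

def first_moment_centroid(img, bg=0.0):
    """Centre-of-gravity spot centroid; bg is an assumed background level."""
    w = np.clip(img - bg, 0, None)
    y, x = np.indices(img.shape)
    s = w.sum()
    return float((x * w).sum() / s), float((y * w).sum() / s)
```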
Huang, Qiongyu; Sauer, John R.; Swatantran, Anu; Dubayah, Ralph
2016-01-01
Drastic shifts in species distributions are a cause of concern for ecologists. Such shifts pose great threat to biodiversity especially under unprecedented anthropogenic and natural disturbances. Many studies have documented recent shifts in species distributions. However, most of these studies are limited to regional scales, and do not consider the abundance structure within species ranges. Developing methods to detect systematic changes in species distributions over their full ranges is critical for understanding the impact of changing environments and for successful conservation planning. Here, we demonstrate a centroid model for range-wide analysis of distribution shifts using the North American Breeding Bird Survey. The centroid model is based on a hierarchical Bayesian framework which models population change within physiographic strata while accounting for several factors affecting species detectability. Yearly abundance-weighted range centroids are estimated. As case studies, we derive annual centroids for the Carolina wren and house finch in their ranges in the U.S. We further evaluate the first-difference correlation between species' centroid movement and changes in winter severity and total population abundance. We also examined associations with changes in centroids of sub-ranges. Change in full-range centroid movements of the Carolina wren significantly correlates with snow cover days (r = −0.58). For both species, the full-range centroid shifts also have strong correlation with total abundance (r = 0.65 and 0.51, respectively). The movements of the full-range centroids of the two species are correlated strongly (up to r = 0.76) with those of the sub-ranges with more drastic population changes. Our study demonstrates the usefulness of centroids for analyzing distribution changes in a two-dimensional spatial context. In particular, it highlights applications that associate the centroid with factors such as environmental stressors, population characteristics, and progression of invasive species. Routine monitoring of changes in the centroid will provide useful insights into long-term avian responses to environmental changes.
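The core quantity is an abundance-weighted centroid of stratum coordinates, and the reported associations are first-difference correlations; a small-area sketch that ignores the hierarchical detectability model:

```python
import numpy as np

def range_centroid(lat, lon, abundance):
    """Abundance-weighted centroid of a species range (planar approximation)."""
    w = abundance / abundance.sum()
    return float(np.sum(w * lat)), float(np.sum(w * lon))

def first_diff_corr(a, b):
    """First-difference correlation between two yearly series."""
    return float(np.corrcoef(np.diff(a), np.diff(b))[0, 1])
```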
NASA Astrophysics Data System (ADS)
Ly, Canh
2004-08-01
The Scan-MUSIC (SMUSIC) algorithm, developed by the U.S. Army Research Laboratory (ARL), improves angular resolution for target detection with the use of a single rotatable radar scanning the angular region of interest. This algorithm has been adapted and extended from the MUSIC algorithm, which has been used for a linear sensor array. Previously, it was shown that the SMUSIC algorithm and a millimeter-wave radar can be used to resolve two closely spaced point targets that exhibit constructive interference, but not targets that exhibit destructive interference. There were therefore some limitations of the algorithm for point targets. In this paper, the SMUSIC algorithm is applied to the problem of resolving real complex scatterer-type targets, which is more useful and of greater practical interest, particularly for future Army radar systems. The paper presents results of the angular resolution of the targets, an M60 tank and an M113 Armored Personnel Carrier (APC), that are within the mainlobe of a Ka-band radar antenna. In particular, we applied the algorithm to resolve centroids of the targets that were placed within the beamwidth of the antenna. The coherent data collected using the stepped-frequency radar were converted to magnitude for the SMUSIC calculation. Even though the signal returns differed significantly for different orientations and offsets of the two targets, we resolved the two target centroids when they were as close as about 1/3 of the antenna beamwidth.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de
A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
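The genetic algorithm's target function, the skewness of the electron-density map, is a standard third moment; a sketch over a gridded map:

```python
import numpy as np

def map_skewness(rho):
    """Skewness of density values; positive skew is typical of protein-like maps."""
    d = rho - rho.mean()
    return float(np.mean(d ** 3) / np.mean(d ** 2) ** 1.5)
```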
Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation
NASA Astrophysics Data System (ADS)
Zhuang, Wei
Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform, and forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting the ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second-derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation and Ice Sensor. Compared to existing ground peak identification algorithms, FICA was tested on plots of different land cover types and showed improved accuracy in ground detection for the vegetation plots and similar accuracy in developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fits the shrub canopy reflection and detects the ground peak by investigating the residual signal, which is generated by subtracting a Gaussian fitting function from the raw waveform. After the subtraction, the overlapping ground peak is identified as the local maximum of the residual signal. In addition, an applicability model was built for determining the waveforms to which the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition and incorporated both waveform intensity, represented by the area covered by a Gaussian function, and its associated height, the centroid of the Gaussian function. By considering signal reflection from different vegetation layers, the developed metrics obtained better estimation accuracy for aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
Convalescing Cluster Configuration Using a Superlative Framework
Sabitha, R.; Karthik, S.
2015-01-01
Competent data mining methods are vital to discover knowledge from databases, which are built as a result of the enormous growth of data. Various techniques of data mining are applied to obtain knowledge from these databases. Data clustering is one such descriptive data mining technique that guides in partitioning data objects into disjoint segments. The K-means algorithm is a versatile algorithm among the various approaches used in data clustering, but the algorithm and its diverse adaptation methods suffer certain performance problems. To overcome these issues, a superlative algorithm has been proposed in this paper to perform data clustering. The specific features of the proposed algorithm are discretizing the dataset, thereby improving the accuracy of clustering, and adopting the binary search initialization method to generate cluster centroids. The generated centroids are fed as input to the K-means approach, which iteratively segments the data objects into their respective clusters. The clustered results are measured for accuracy and validity. Experiments conducted by testing the approach on datasets from the UC Irvine Machine Learning Repository evidently show that the accuracy and validity measures are higher than those of the other two approaches, namely, simple K-means and the Binary Search method. Thus, the proposed approach proves that the discretization process will improve the efficacy of descriptive data mining tasks. PMID:26543895
Adaptive fuzzy system for 3-D vision
NASA Technical Reports Server (NTRS)
Mitra, Sunanda
1993-01-01
An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controllers.
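One combined FCM update of the kind AFLC uses to relocate its centroids, with the usual fuzzifier m = 2 (an assumed value); X holds samples row-wise and V the current centroids:

```python
import numpy as np

def fcm_step(X, V, m=2.0):
    """One fuzzy c-means iteration: memberships, then relocated centroids."""
    d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    Um = U ** m
    V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted means
    return U, V_new
```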
Kernel-based discriminant feature extraction using a representative dataset
NASA Astrophysics Data System (ADS)
Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.
2002-07-01
Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified by both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points that are generated from data-editing techniques and centroid points that are determined by using the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets, and it also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
The research of adaptive-exposure on spot-detecting camera in ATP system
NASA Astrophysics Data System (ADS)
Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu
2013-08-01
High-precision acquisition, tracking, and pointing (ATP) is one of the key techniques of laser communication. The spot-detecting camera is used to detect the direction of the beacon in a laser communication link, so that it can obtain the position information of the communication terminal for the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in satellite-to-earth laser communication ATP systems needs high precision in target detection: the positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise. But the intensity of the beacon changes greatly during communication owing to distance, atmospheric scintillation, weather, etc. The output signal of the detector is insufficient when the camera underexposes the beacon because of low light intensity; on the other hand, it saturates when the camera overexposes the beacon because of high light intensity. The accuracy of the centroid algorithm becomes worse if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced markedly. To maintain accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. The algorithm of an adaptive-exposure technique for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera in a space-based laser communication system is described, which utilizes the adaptive-exposure algorithm to adapt its exposure time. Test results from the constructed imaging experiment system verify the design. Experimental results prove that this design can restrain the reduction of positioning accuracy caused by changes in light intensity, so the camera can maintain stable, high positioning accuracy during communication.
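A toy proportional controller illustrating the adaptive-exposure idea; all set-points and limits are invented for the sketch, and the paper's actual control law is not reproduced:

```python
def adjust_exposure(t_exp, peak, full_scale,
                    target=0.6, gain=0.5, t_min=1e-5, t_max=5e-2):
    """Nudge exposure time so the spot peak sits near target*full_scale."""
    err = target - peak / full_scale        # >0: underexposed, <0: overexposed
    t_new = t_exp * (1.0 + gain * err)
    return min(max(t_new, t_min), t_max)    # clamp to sensor limits (seconds)
```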
JASMINE project Instrument design and centroiding experiment
NASA Astrophysics Data System (ADS)
Yano, Taihei; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki
JASMINE will study the fundamental structure and evolution of the Milky Way Galaxy. To accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions and proper motions of about 10 million stars with a precision of 10 μarcsec at z = 14 mag. In this paper the instrument design (optics, detectors, etc.) of JASMINE is presented. We also show a CCD centroiding experiment for estimating positions of star images. The experimental result shows that the accuracy of estimated distances has a variance of less than 0.01 pixel.
Effects of 3D Earth structure on W-phase CMT parameters
NASA Astrophysics Data System (ADS)
Morales, Catalina; Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo
2017-04-01
The source inversion of the W-phase has demonstrated a great potential to provide fast and reliable estimates of the centroid moment tensor (CMT) for moderate to large earthquakes. It has since been implemented in different operational environments (NEIC-USGS, PTWC, etc.) with the aim of providing rapid CMT solutions. These solutions are in particular useful for tsunami warning purposes. Computationally, W-phase waveforms are usually synthesized by summation of normal modes at long period (100 - 1000 s) for a spherical Earth model (e.g., PREM). Although the energy of these modes mainly stays in the mantle where lateral structural variations are relatively small, the impact of 3D heterogeneities on W-phase solutions has not yet been quantified. In this study, we investigate possible bias in W-phase source parameters due to unmodeled lateral structural heterogeneities. We generate a simulated dataset consisting of synthetic seismograms of large past earthquakes that accounts for the Earth's 3D structure. The W-phase algorithm is then used to invert the synthetic dataset for earthquake CMT parameters with and without added noise. Results show that the impact of 3D heterogeneities is generally larger for surface waves than for W-phase waveforms. However, some discrepancies are noted between inverted W-phase parameters and target values. Particular attention is paid to the possible bias induced by the unmodeled 3D structure in the location of the W-phase centroid. Preliminary results indicate that the parameter most susceptible to 3D Earth structure is the centroid depth.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
Computing the apparent centroid of radar targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.E.
1996-12-31
A high-frequency multibounce radar scattering code was used as a simulation platform for demonstrating an algorithm to compute the ARC of specific radar targets. To illustrate this simulation process, several target models were used. Simulation results for a sphere model were used to determine the errors of approximation associated with the simulation, verifying the process. The severity of glint-induced tracking errors was also illustrated using a model of an F-15 aircraft. It was shown, in a deterministic manner, that the ARC of a target can fall well outside its physical extent. Finally, the apparent radar centroid simulation based on a ray-casting procedure is well suited for use on most massively parallel computing platforms and could lead to the development of a near real-time radar tracking simulation for applications such as endgame fuzing, survivability, and vulnerability analyses using specific radar targets and fuze algorithms.
An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes
NASA Technical Reports Server (NTRS)
Ding, Hong Q.; Ferraro, Robert D.
1996-01-01
A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
Closed geometric models in medical applications
NASA Astrophysics Data System (ADS)
Jagannathan, Lakshmipathy; Nowinski, Wieslaw L.; Raphel, Jose K.; Nguyen, Bonnie T.
1996-04-01
Conventional surface fitting methods give twisted surfaces and complicate capping closure. This is typical of surfaces that lack rectangular topology. We suggest an algorithm that overcomes these limitations and present its analysis with experimental results. The algorithm assumes the mass center lies inside the object. Both capping complications and twisting are results of inadequate information on the geometric proximity of points and surfaces that are proximal in the parametric space. Geometric proximity at the contour level is handled by mapping the points along the contour onto a hyper-spherical space. The resulting angular gradation with respect to the centroid is monotonic and hence avoids the twisting problem. Inter-contour geometric proximity is achieved by partitioning the point set based on the angle it makes with the respective centroids. Capping complications are avoided by generating closed cross curves connecting curves that are reflections about the abscissa. The method is of immense use for the generation of deep cerebral structures and is applied to the deep structures generated from the Schaltenbrand-Wahren brain atlas.
Application of k-means clustering algorithm in grouping the DNA sequences of hepatitis B virus (HBV)
NASA Astrophysics Data System (ADS)
Bustamam, A.; Tasman, H.; Yuniarti, N.; Frisca, Mursidah, I.
2017-07-01
Based on WHO data, an estimated 15 million people worldwide who are infected with hepatitis B (HBsAg+), caused by HBV, are also infected with hepatitis D, caused by HDV. Hepatitis D infection can occur simultaneously with hepatitis B (coinfection) or after a person is exposed to chronic hepatitis B (superinfection). Since HDV cannot live without HBV, HDV infection is closely related to HBV infection; hence every effort to prevent hepatitis B can indirectly prevent hepatitis D. This paper presents clustering of HBV DNA sequences using the k-means clustering algorithm and R programming. The clustering process starts by collecting HBV DNA sequences from GenBank and then extracting features from the sequences using n-mer frequencies; the extraction results are collected as a matrix and normalized to the interval [0, 1] using min-max normalization, which is later used as the input data. The number of clusters is two, and the initial centroids are chosen randomly. In each iteration, the distance of every object to each centroid is calculated using the Euclidean distance, and the minimum distance determines cluster membership, until two convergent clusters are created. As the result, the HBV viruses in the first cluster are more virulent than those in the second cluster, so the HBV viruses in the first cluster can potentially evolve with HDV viruses that cause hepatitis D.
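A sketch of the feature-extraction stage as described: overlapping n-mer counts turned into frequencies and min-max normalized to [0, 1], ready for a k = 2 Euclidean k-means (n = 3 is an assumed choice):

```python
import numpy as np
from itertools import product

def nmer_features(seqs, n=3):
    """Min-max-normalized n-mer frequency matrix for DNA sequences."""
    kmers = {''.join(p): i for i, p in enumerate(product('ACGT', repeat=n))}
    F = np.zeros((len(seqs), len(kmers)))
    for r, s in enumerate(seqs):
        for i in range(len(s) - n + 1):              # overlapping windows
            j = kmers.get(s[i:i + n].upper())
            if j is not None:
                F[r, j] += 1
    F /= np.maximum(F.sum(axis=1, keepdims=True), 1)  # counts -> frequencies
    lo, hi = F.min(axis=0), F.max(axis=0)
    return (F - lo) / np.where(hi > lo, hi - lo, 1)   # min-max to [0, 1]
```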
Centroiding Experiment for Determining the Positions of Stars with High Precision
NASA Astrophysics Data System (ADS)
Yano, T.; Araki, H.; Hanada, H.; Tazawa, S.; Gouda, N.; Kobayashi, Y.; Yamada, Y.; Niwa, Y.
2010-12-01
We have experimented with the determination of the positions of star images on a detector with high precision such as 10 microarcseconds, required by a space astrometry satellite, JASMINE. In order to accomplish such a precision, we take the following two procedures. (1) We determine the positions of star images on the detector with the precision of about 0.01 pixel for one measurement, using an algorithm for estimating them from photon weighted means of the star images. (2) We determine the positions of star images with the precision of about 0.0001-0.00001 pixel, which corresponds to that of 10 microarcseconds, using a large amount of data over 10000 measurements, that is, the error of the positions decreases according to the amount of data. Here, we note that the procedure 2 is not accomplished when the systematic error in our data is not excluded adequately even if we use a large amount of data. We first show the method to determine the positions of star images on the detector using photon weighted means of star images. This algorithm, used in this experiment, is very useful because it is easy to calculate the photon weighted mean from the data. This is very important in treating a large amount of data. Furthermore, we need not assume the shape of the point spread function in deriving the centroid of star images. Second, we show the results in the laboratory experiment for precision of determining the positions of star images. We obtain that the precision of estimation of positions of star images on the detector is under a variance of 0.01 pixel for one measurement (procedure 1). We also obtain that the precision of the positions of star images becomes a variance of about 0.0001 pixel using about 10000 measurements (procedure 2).
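A numerical toy illustrating the two procedures: the photon-weighted mean of a single frame carries a roughly 0.01 pixel error, and averaging about 10,000 frames pushes it toward 0.0001 pixel, provided no systematic error remains (the PSF width and photon count below are invented values):

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_CENTRE, SIGMA_PSF = 12.34, 1.0          # pixels, illustrative

def one_measurement(n_photons=10_000):
    """Procedure 1: photon-weighted mean of one star-image exposure."""
    return rng.normal(TRUE_CENTRE, SIGMA_PSF, n_photons).mean()

# Procedure 2: the error of the averaged centroid falls as 1/sqrt(N_frames),
# here from ~0.01 pix per frame to ~1e-4 pix over 10,000 frames.
frames = np.array([one_measurement() for _ in range(10_000)])
print(abs(frames.mean() - TRUE_CENTRE))
```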
Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander
2011-01-01
This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
Fast and Adaptive Auto-focusing Microscope
NASA Astrophysics Data System (ADS)
Obara, Takeshi; Igarashi, Yasunobu; Hashimoto, Koichi
Optical microscopes are widely used in biological and medical research. Using a microscope, we can observe cellular movements, including intracellular ions and molecules tagged with fluorescent dyes, at high magnification. However, a freely motile cell easily escapes from the 3D field of view of a typical microscope. Therefore, we propose a novel auto-focusing algorithm and develop an auto-focusing and tracking microscope. The XYZ positions of a microscopic stage are feedback-controlled to focus on and track the cell automatically. A bright-field image is used to estimate the cellular position. XY centroids are used to estimate the XY position of the tracked cell. To estimate the Z position, we use a diffraction pattern around the cell membrane. This estimation method is called Depth from Diffraction (DFDi). However, this method is not robust to individual differences between cells because the diffraction pattern depends on each cellular shape. Therefore, in this study, we propose a real-time correction of DFDi using the 2D Laplacian of an intracellular area as a goodness-of-focus measure. To evaluate the performance of the developed algorithm and microscope, we auto-focus on and track a freely moving paramecium. In this experiment, the paramecium is auto-focused and kept inside the scope of the microscope for 45 s. The evaluated focal error is within 5 µm, while the length and thickness of the paramecium are about 200 µm and 50 µm, respectively.
Peak-locking centroid bias in Shack-Hartmann wavefront sensing
NASA Astrophysics Data System (ADS)
Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.
2018-05-01
Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended-scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed, but these solutions allow only partial bias corrections. To date, no systematic study of the bias error had been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7, to values of ≲0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
Deep neural network-based domain adaptation for classification of remote sensing images
NASA Astrophysics Data System (ADS)
Ma, Li; Song, Jiazhen
2017-10-01
We investigate the effectiveness of deep neural networks for cross-domain classification of remote sensing images. In the network, class centroid alignment is utilized as a domain adaptation strategy, enabling the network to transfer knowledge from the source domain to the target domain on a per-class basis. Since predicted labels of target data must be used to estimate the centroid of each class, we use overall centroid alignment as a coarse domain adaptation method to improve the estimation accuracy. In addition, the rectified linear unit is used as the activation function to produce sparse features, which may improve separability. The proposed network provides both aligned features and an adaptive classifier, and achieves label-free classification of target-domain data. Experimental results using Hyperion, NCALM, and WorldView-2 remote sensing images demonstrate the effectiveness of the proposed approach.
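The per-class centroid alignment idea can be illustrated with a small sketch; the loss below is an assumption-laden stand-in for the paper's network objective, using NumPy arrays of features, source labels, and target pseudo-labels:

```python
import numpy as np

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, n_classes):
    """Sum of squared distances between per-class feature centroids of
    the source domain and the target domain (target class assignments
    come from predicted pseudo-labels, as described above)."""
    loss = 0.0
    for c in range(n_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo == c]
        if len(s) == 0 or len(t) == 0:
            continue  # class missing from a batch: no alignment term
        loss += np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
    return loss / n_classes
```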
NASA Astrophysics Data System (ADS)
Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.
2013-09-01
The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution has been proposed. Shallower DBMS values are found for the south-western region: as low as 22 km in the south-west Deccan-trap-covered regions, and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
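For orientation, the conventional centroid method that the modified method builds on estimates the top and centroid depths from slopes of the radially averaged power spectrum and combines them; a minimal sketch, with illustrative wavenumber bands and k taken as angular wavenumber in rad/km:

```python
import numpy as np

def dbms_centroid(k, power):
    """Conventional centroid estimate of the DBMS from a radially
    averaged power spectrum P(k): top depth Zt from the slope of
    ln(sqrt(P)) at high k, centroid depth Z0 from the slope of
    ln(sqrt(P)/k) at low k, then Zb = 2*Z0 - Zt."""
    amp = np.sqrt(power)
    split = np.median(k)
    hi = k > split               # high-wavenumber band (assumed choice)
    lo = (k > 0) & (k <= split)  # low-wavenumber band (assumed choice)
    zt = -np.polyfit(k[hi], np.log(amp[hi]), 1)[0]
    z0 = -np.polyfit(k[lo], np.log(amp[lo] / k[lo]), 1)[0]
    return 2.0 * z0 - zt
```

The scaling-distribution correction proposed in the paper modifies these spectral slopes and is not reproduced here.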
Fusing Image Data for Calculating Position of an Object
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang; Liebersbach, Robert; Trebi-Ollenu, Ashitey
2007-01-01
A computer program has been written for use in maintaining the calibration, with respect to the positions of imaged objects, of a stereoscopic pair of cameras on each of the Mars Exploration Rovers Spirit and Opportunity. The program identifies and locates a known object in the images. The object in question is part of a Moessbauer spectrometer located at the tip of a robot arm, the kinematics of which are known. In the program, the images are processed through a module that extracts edges, combines the edges into line segments, and then derives ellipse centroids from the line segments. The images are also processed by a feature-extraction algorithm that performs a wavelet analysis, then performs a pattern-recognition operation in the wavelet-coefficient space to determine matches to a texture feature measure derived from the horizontal, vertical, and diagonal coefficients. The centroids from the ellipse finder and the wavelet feature matcher are then fused to determine co-location. In the event that a match is found, the centroid (or centroids, if multiple matches are present) is reported. If no match is found, the process reports the results of the analyses for further examination by human experts.
Automatic detection and quantitative analysis of cells in the mouse primary motor cortex
NASA Astrophysics Data System (ADS)
Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui
2014-09-01
Neuronal cells play a very important role in metabolism regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse-brain coronal sections can be acquired at 1-μm voxel resolution. We have developed a completely automatic pipeline to detect cell centroids and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Evaluations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant variations in cellular density across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
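Steps ii-iii of such a pipeline reduce, in the simplest case, to binarization followed by connected-component labeling; a minimal sketch (not the authors' implementation, which additionally performs contour segmentation):

```python
import numpy as np
from scipy import ndimage

def detect_cell_centroids(image, threshold):
    """Binarize an image, label connected components, and return one
    centroid (center of mass) per component."""
    binary = np.asarray(image) > threshold
    labels, n = ndimage.label(binary)
    return ndimage.center_of_mass(binary, labels, range(1, n + 1))
```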
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A.; Gupta, P.; Bhartia, P. K.; Veefkind, P.; Sneep, M.; de Haan, J.; Polonsky, I.; Spurr, R.
2012-01-01
The cloud Optical Centroid Pressure (OCP), also known as the effective cloud pressure, is a satellite-derived parameter that is commonly used in trace-gas retrievals to account for the effects of clouds on near-infrared through ultraviolet radiance measurements. Fast simulators are desirable to further expand the use of cloud OCP retrievals into the operational and climate communities for applications such as data assimilation and evaluation of cloud vertical structure in general circulation models. In this paper, we develop and validate fast simulators that provide estimates of the cloud OCP given a vertical profile of optical extinction. We use a pressure-weighting scheme in which the weights depend upon optical parameters of clouds and/or aerosol. A cloud weighting function is easily extracted using this formulation. We then use fast simulators to compare two different satellite cloud OCP retrievals from the Ozone Monitoring Instrument (OMI) with estimates based on collocated cloud extinction profiles from a combination of CloudSat radar and MODIS visible radiance data. These comparisons are made over a wide range of conditions to provide a comprehensive validation of the OMI cloud OCP retrievals. We find generally good agreement between OMI cloud OCPs and those predicted by CloudSat. However, the OMI cloud OCPs from the two independent algorithms agree better with each other than either does with the estimates from CloudSat/MODIS. Differences between OMI cloud OCPs and those based on CloudSat/MODIS may result from undetected snow/ice at the surface, cloud 3-D effects, low-altitude clouds missed by CloudSat, and the fact that CloudSat only observes a relatively small fraction of an OMI field-of-view.
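The pressure-weighting idea can be written compactly; the toy function below assumes co-registered arrays of layer pressures and optical extinctions, with the paper's more elaborate weight construction reduced to a plain extinction weighting:

```python
import numpy as np

def optical_centroid_pressure(pressure, extinction):
    """Toy OCP: average of level pressures weighted by optical
    extinction. The operational simulator derives its weights from
    additional optical parameters of clouds and aerosol."""
    w = np.asarray(extinction, dtype=float)
    return np.sum(pressure * w) / np.sum(w)
```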
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A. P.; Gupta, Pawan; Bhartia, P. K.; Veefkind, Pepijn; Sneep, Maarten; deHaan, Johan; Polonsky, Igor; Spurr, Robert
2011-01-01
We have developed a relatively simple scheme for simulating retrieved cloud optical centroid pressures (OCP) from satellite solar backscatter observations. We have compared simulator results with those from more detailed retrieval simulators that more fully account for the complex radiative transfer in a cloudy atmosphere. We used this fast simulator to conduct a comprehensive evaluation of cloud OCPs from the two OMI algorithms using collocated data from CloudSat and Aqua MODIS, a unique situation afforded by the A-train formation of satellites. We find that both OMI algorithms perform reasonably well and that the two algorithms agree better with each other than either does with the collocated CloudSat data. This indicates that patchy snow/ice, cloud 3D, and aerosol effects not simulated with the CloudSat data are affecting both algorithms similarly. We note that the collocation with CloudSat occurs mainly on the East side of OMI's swath. Therefore, we are not able to address cross-track biases in OMI cloud OCP retrievals. Our fast simulator may also be used to simulate cloud OCP from output generated by general circulation models (GCM) with appropriate account of cloud overlap. We have implemented such a scheme and plan to compare OMI data with GCM output in the near future.
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
Correlation Techniques as Applied to Pose Estimation in Space Station Docking
NASA Technical Reports Server (NTRS)
Rollins, J. Michael; Juday, Richard D.; Monroe, Stanley E., Jr.
2002-01-01
The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not provide direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed to each connecting module at carefully surveyed positions. The appearance of a subset of spots must essentially form a constellation of specific relative positions in the incoming digital image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th of a pixel in some cases. This paper presents an approach to spot centroid estimation using cross-correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for compensating for shadow, obscuration and lighting irregularity are discussed.
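Sub-pixel refinement of a correlation peak is commonly done with a three-point parabola fit; the sketch below shows the 1-D version (applied along rows and columns it yields 2-D sub-pixel positions) and is illustrative, not the paper's implementation:

```python
import numpy as np

def subpixel_peak_1d(corr):
    """Refine the integer argmax of a correlation curve by fitting a
    parabola through the peak sample and its two neighbours."""
    i = int(np.argmax(corr))
    if i == 0 or i == len(corr) - 1:
        return float(i)  # peak on the boundary: no refinement possible
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
```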
NASA Astrophysics Data System (ADS)
Ward, W. O. C.; Wilkinson, P. B.; Chambers, J. E.; Oxby, L. S.; Bai, L.
2014-04-01
A novel method for effectively identifying bedrock subsurface elevation from electrical resistivity tomography images is described. Identifying subsurface boundaries in the tomographic data can be difficult due to the smoothness constraints used in inversion, so a statistical population-based approach is used that extends previous work on calculating isoresistivity surfaces. The analysis framework involves a procedure for guiding a clustering approach based on the fuzzy c-means algorithm. An approximation of the resistivity distribution, found using kernel density estimation, is utilized to guide the cluster centroids used to classify the data. A fuzzy method was chosen over hard clustering due to the uncertainty of hard edges in the tomographic data, and a measure of clustering uncertainty was defined based on the reciprocal of cluster membership. The algorithm was validated by direct comparison with known observed bedrock depths at two 3-D survey sites, using real-time GPS information on bedrock exposed by quarrying at one site and borehole logs at the other. Results show detection as accurate as a leading isosurface estimation method, while the proposed algorithm requires significantly less user input and prior site knowledge. Furthermore, the method is effectively dimension-independent and will scale to data of higher spatial dimension without a significant effect on runtime. A discussion of automated versus supervised analysis is also presented.
Improve threshold segmentation using features extraction to automatic lung delimitation.
França, Cleunio; Vasconcelos, Germano; Diniz, Paula; Melo, Pedro; Diniz, Jéssica; Novaes, Magdala
2013-01-01
With the consolidation of PACS and RIS systems, algorithms for tissue segmentation and disease detection have evolved intensely in recent years. These algorithms have advanced in accuracy and specificity; however, there is still some way to go before they achieve error rates and processing times suitable for daily diagnostic use. The objective of this study is to propose an algorithm for lung segmentation in X-ray computed tomography images that uses feature extraction, such as centroid and orientation measures, to improve basic threshold segmentation. As a result we found an accuracy of 85.5%.
NASA Astrophysics Data System (ADS)
Asoka-Kumar, P.; Leung, T. C.; Lynn, K. G.; Nielsen, B.; Forcier, M. P.; Weinberg, Z. A.; Rubloff, G. W.
1992-06-01
The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained by the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate of the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-Gaussian positron implantation profile. The inadequacy of the present analysis scheme is evident from the derived quantities, and alternative methods are required for better predictions.
Recent Improvements to the Finite-Fault Rupture Detector Algorithm: FinDer II
NASA Astrophysics Data System (ADS)
Smith, D.; Boese, M.; Heaton, T. H.
2015-12-01
Constraining the finite-fault rupture extent and azimuth is crucial for accurately estimating ground motion in large earthquakes. Detecting and modeling finite-fault ruptures in real time is thus essential to both earthquake early warning (EEW) and rapid emergency response. Following extensive real-time and offline testing, the finite-fault rupture detector algorithm, FinDer (Böse et al., 2012 & 2015), was successfully integrated into the California-wide ShakeAlert EEW demonstration system. Since April 2015, FinDer has been scanning real-time waveform data from approximately 420 strong-motion stations in California for peak ground acceleration (PGA) patterns indicative of earthquakes. FinDer analyzes strong-motion data by comparing spatial images of observed PGA with theoretical templates modeled from empirical ground-motion prediction equations (GMPEs). If the correlation between the observed and theoretical PGA is sufficiently high, a report is sent to ShakeAlert including the estimated centroid position, length, and strike of the ongoing fault rupture, together with their uncertainties. Rupture estimates are continuously updated as new data arrive. As part of a joint effort between USGS Menlo Park, ETH Zurich, and Caltech, we have rewritten FinDer in C++ to obtain a faster and more flexible implementation. One new feature of FinDer II is that multiple contour lines of high-frequency PGA are computed and correlated with templates, allowing the detection of both large earthquakes and much smaller (~M3.5) events shortly after their nucleation. Unlike previous EEW algorithms, FinDer II thus provides a modeling approach for both small-magnitude point-source and larger-magnitude finite-fault ruptures, with consistent error estimates for the entire event magnitude range.
NASA Technical Reports Server (NTRS)
Wharton, S. W.
1980-01-01
An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. The algorithm interfaces the rapid numerical processing capacity of a computer with the human ability to integrate qualitative information. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who evaluates the result and may elect to modify the cluster structure. Clusters can be deleted or lumped pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. ICAP was implemented in APL (A Programming Language), an interactive computer language. The flexibility of the algorithm was evaluated using data from different LANDSAT scenes to simulate two situations: one in which the analyst is assumed to have no prior knowledge about the data and wishes the clusters to be formed more or less automatically, and the other in which the analyst is assumed to have some knowledge about the data structure and wishes to use that information to closely supervise the clustering process. For comparison, an existing clustering method was also applied to the two data sets.
ICAP - An Interactive Cluster Analysis Procedure for analyzing remotely sensed data
NASA Technical Reports Server (NTRS)
Wharton, S. W.; Turner, B. J.
1981-01-01
An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. ICAP differs from conventional clustering algorithms by allowing the analyst to optimize the cluster configuration by inspection, rather than by manipulating process parameters. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who can evaluate and elect to modify the cluster structure. Clusters can be deleted, or lumped together pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. The principal advantage of this approach is that it allows prior information (when available) to be used directly in the analysis, since the analyst interacts with ICAP in a straightforward manner, using basic terms with which he is more likely to be familiar. Results from testing ICAP showed that an informed use of ICAP can improve classification, as compared to an existing cluster analysis procedure.
Experimental detection of optical vortices with a Shack-Hartmann wavefront sensor.
Murphy, Kevin; Burke, Daniel; Devaney, Nicholas; Dainty, Chris
2010-07-19
Laboratory experiments are carried out to detect optical vortices in conditions typical of those experienced when a laser beam is propagated through the atmosphere. A Spatial Light Modulator (SLM) is used to mimic atmospheric turbulence, and a Shack-Hartmann wavefront sensor is utilised to measure the slopes of the wavefront surface. A matched filter algorithm determines the positions of the Shack-Hartmann spot centroids more robustly than a centroiding algorithm. The slope discrepancy is then obtained by subtracting the slopes calculated from a least-squares reconstruction of the phase from the slopes measured by the wavefront sensor. The slope discrepancy field is used as an input to the branch point potential method to find whether a vortex is present and, if so, to give its position and sign. The use of the slope discrepancy technique greatly improves the detection rate of the branch point potential method. This work represents the first time the branch point potential method has been used to detect optical vortices in an experimental setup.
Evidence against global attention filters selective for absolute bar-orientation in human vision.
Inverso, Matthew; Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George
2016-01-01
The finding that an item of type A pops out from an array of distractors of type B typically is taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering bars of the opposite orientation in a centroid task.
Dimitriadis, Stavros I; López, María E; Bruña, Ricardo; Cuesta, Pablo; Marcos, Alberto; Maestú, Fernando; Pereda, Ernesto
2018-01-01
Our work aimed to demonstrate the combination of machine learning and graph theory for the design of a connectomic biomarker for mild cognitive impairment (MCI) subjects using eyes-closed neuromagnetic recordings. The whole analysis was based on source-reconstructed neuromagnetic activity. As ROI representations, we employed the principal component analysis (PCA) and centroid approaches. As representative bi-variate connectivity estimators for the estimation of intra- and cross-frequency interactions, we adopted the phase locking value (PLV), its imaginary part (iPLV) and the correlation of the envelope (CorrEnv). Both intra-frequency and cross-frequency interactions (CFC) were estimated with the three connectivity estimators within the seven frequency bands (intra-frequency) and in pairs (CFC), correspondingly. We demonstrated how different versions of functional connectivity graphs, single-layer (SL-FCG) and multi-layer (ML-FCG), can give a different view of the functional interactions across brain areas. Finally, we applied machine learning techniques with the main aim of building a reliable connectomic biomarker, analyzing both SL-FCG and ML-FCG in two different ways: as a whole unit using a tensorial extraction algorithm, and as single pair-wise coupling estimations. We concluded that the edge-weighted feature selection strategy outperformed the tensorial treatment of SL-FCG and ML-FCG. The highest classification performance was obtained with the centroid ROI representation and edge-weighted analysis of the SL-FCG, reaching 98% for the CorrEnv in α1:α2 and 94% for the iPLV in α2. Classification performance based on the multi-layer participation coefficient, a multiplexity index, reached 52% for iPLV and 52% for CorrEnv. The selected functional connections that build the multivariate connectomic biomarker in the edge-weighted scenario are located in the default-mode, fronto-parietal, and cingulo-opercular networks. Our analysis supports the notion of analyzing FCGs simultaneously in intra- and cross-frequency whole-brain interactions with various connectivity estimators in beamformed recordings.
NASA Astrophysics Data System (ADS)
Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L.; Hamprecht, Fred A.; Winkler, Andreas
2014-06-01
Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.
Asteroid detection using a single multi-wavelength CCD scan
NASA Astrophysics Data System (ADS)
Melton, Jonathan
2016-09-01
Asteroid detection is a topic of great interest due to the possibility of diverting potentially dangerous asteroids or mining potentially lucrative ones. Currently, asteroid detection is generally performed by taking multiple images of the same patch of sky separated by 10-15 minutes and then subtracting the images to find movement. However, this is time-consuming because the same area must be revisited multiple times per night. This paper describes an algorithm that can detect asteroids using a single CCD camera scan, thus cutting down on the time and cost of an asteroid survey. The algorithm is based on the fact that some telescopes scan the sky at multiple wavelengths with a small time separation between the wavelength components; as a result, an object moving with sufficient speed will appear in different places in the different wavelength components of the same image. Using image processing techniques, we detect the centroids of points of light in the first component and compare their positions to the centroids in the other components using a nearest-neighbour algorithm. The algorithm was applied to a test set of 49 images obtained from the Sloan telescope in New Mexico and found 100% of the known asteroids with only 3 false positives. This algorithm has the advantage of decreasing the amount of time required to perform an asteroid scan, thus allowing more sky to be scanned in the same amount of time or freeing the telescope for other pursuits.
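The nearest-neighbour comparison between wavelength components might look like the following sketch; the displacement thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def moving_candidates(cents_a, cents_b, min_shift, max_shift):
    """Match centroids from one wavelength component to their nearest
    neighbours in another; sources displaced by a plausible amount
    between exposures are flagged as moving-object candidates."""
    dist, idx = cKDTree(cents_b).query(cents_a)
    hits = np.flatnonzero((dist > min_shift) & (dist < max_shift))
    return [(int(i), int(idx[i])) for i in hits]
```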
Precision targeting in guided munition using IR sensor and MmW radar
NASA Astrophysics Data System (ADS)
Sreeja, S.; Hablani, H. B.; Arya, H.
2015-10-01
Conventional munitions are not guided with sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a Precision Guided Munition (PGM) equipped with an infrared sensor and a millimeter wave radar (IR and MmW, for short). Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight rate to intercept it, an Extended Kalman Filter is composed whose state vector consists of the cascaded state vectors of missile dynamics and target dynamics. The line-of-sight angle measurement from the infrared seeker is obtained by centroiding the target image at 40 Hz. The centroid estimation of the images in the focal plane is performed at a frequency of 10 Hz: centroids of four consecutive images are averaged, yielding a time-averaged centroid and implying some measurement delay. The miss distance achieved when image processing delays are included is 1.45 m.
Precision targeting in guided munition using infrared sensor and millimeter wave radar
NASA Astrophysics Data System (ADS)
Sulochana, Sreeja; Hablani, Hari B.; Arya, Hemendra
2016-07-01
Conventional munitions are not guided with sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a precision guided munition equipped with an infrared (IR) sensor and a millimeter wave radar (MmW). Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight (LOS) rate to intercept it, an extended Kalman filter is composed whose state vector consists of cascaded state vectors of missile dynamics and target dynamics. The LOS angle measurement from the IR seeker is by centroiding the target image in 40 Hz. The centroid estimation of the images in the focal plane is at a frequency of 10 Hz. Every 10 Hz, centroids of four consecutive images are averaged, yielding a time-averaged centroid, implying some measurement delay. The miss distance achieved by including image processing delays is 1.45 m.
The Heterogeneous P-Median Problem for Categorization Based Clustering
ERIC Educational Resources Information Center
Blanchard, Simon J.; Aloise, Daniel; DeSarbo, Wayne S.
2012-01-01
The p-median offers an alternative to centroid-based clustering algorithms for identifying unobserved categories. However, existing p-median formulations typically require data aggregation into a single proximity matrix, resulting in masked respondent heterogeneity. A proposed three-way formulation of the p-median problem explicitly considers…
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wavefront slopes in the fields of target tracking and wavefront sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photon noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum intensity of the spot as the threshold, and TkCoG, which uses μn + κσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise. First, their impact on the detection error under various SNR conditions is simulated to find how to choose the value of κ or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
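A minimal sketch of TmCoG, assuming a background-subtracted spot image (TkCoG differs only in how the threshold value is chosen):

```python
import numpy as np

def tm_cog(spot, m):
    """Threshold at m% of the spot's maximum intensity, clip negatives
    to zero, and compute the centre of gravity of what remains."""
    spot = np.asarray(spot, dtype=float)
    w = np.clip(spot - (m / 100.0) * spot.max(), 0.0, None)
    ys, xs = np.indices(w.shape)
    s = w.sum()
    return (xs * w).sum() / s, (ys * w).sum() / s
```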
Determination of wavefront structure for a Hartmann wavefront sensor using a phase-retrieval method.
Polo, A; Kutchoukov, V; Bociort, F; Pereira, S F; Urbach, H P
2012-03-26
We apply a phase retrieval algorithm to the intensity pattern of a Hartmann wavefront sensor to measure with enhanced accuracy the phase structure of a Hartmann hole array. It is shown that the rms wavefront error achieved by phase reconstruction is one order of magnitude smaller than the one obtained from a typical centroid algorithm. Experimental results are consistent with a phase measurement performed independently using a Shack-Hartmann wavefront sensor.
Experimental results for correlation-based wavefront sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A; Palmer, D W; LaFortune, K N
2005-07-01
Correlation wave-front sensing can improve Adaptive Optics (AO) system performance in two key areas. For point-source-based AO systems, correlation is more accurate, more robust to changing conditions, and provides lower noise than a centroiding algorithm; experimental results from the Lick AO system and the SSHCL laser AO system confirm this. For remote imaging, correlation enables the use of extended objects for wave-front sensing. Results from short horizontal-path experiments show algorithm properties and requirements.
PyCCF: Python Cross Correlation Function for reverberation mapping studies
NASA Astrophysics Data System (ADS)
Sun, Mouyuan; Grier, C. J.; Peterson, B. M.
2018-05-01
PyCCF emulates a Fortran program written by B. Peterson for use in reverberation mapping. The code cross-correlates two unevenly sampled light curves using linear interpolation and measures the peak and centroid of the cross-correlation function. In addition, it is possible to run Monte Carlo iterations using flux randomization and random subset selection (RSS) to produce cross-correlation centroid distributions and estimate the uncertainties in the cross-correlation results.
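The centroid of a cross-correlation function is conventionally computed over the region above a fraction of the peak; a sketch of that step (not PyCCF itself), with the common 0.8 threshold as a default:

```python
import numpy as np

def ccf_centroid(lags, ccf, frac=0.8):
    """Centroid of the CCF over points at or above `frac` of the peak."""
    mask = ccf >= frac * ccf.max()
    return np.sum(lags[mask] * ccf[mask]) / np.sum(ccf[mask])
```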
An Adaptive Cross-Correlation Algorithm for Extended-Scene Shack-Hartmann Wavefront Sensing
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.
2007-01-01
This viewgraph presentation reviews the Adaptive Cross-Correlation (ACC) algorithm for extended-scene Shack-Hartmann wavefront (WF) sensing. A Shack-Hartmann sensor places a lenslet array at a plane conjugate to the WF error source; each sub-aperture lenslet samples the WF in the corresponding patch of the WF. A description of the ACC algorithm is included. The ACC has several benefits, among them: ACC requires only about 4 image-shifting iterations to achieve 0.01-pixel accuracy, and ACC is insensitive to both background light and noise, making it much more robust than centroiding.
W phase source inversion for moderate to large earthquakes (1990-2010)
Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.
2012-01-01
Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw ≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw ≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the waveforms of the target event. To deal with such difficult situations, we remove the perturbation caused by earlier disturbing events by subtracting the corresponding synthetics from the data. The CMT parameters for the disturbed event can then be retrieved using the residual seismograms. We also explore the feasibility of obtaining source parameters of smaller earthquakes in the range 6.0 ≤ Mw < 6.5; the results suggest that W phase source inversion can be performed reliably for events of Mw = 6 or larger.
Mars global digital dune database and initial science results
Hayward, R.K.; Mullins, K.F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, A.; Christensen, P.R.
2007-01-01
A new Mars Global Digital Dune Database (MGD3) constructed using Thermal Emission Imaging System (THEMIS) infrared (IR) images provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields (area >1 km2) that will help researchers to understand global climatic and sedimentary processes that have shaped the surface of Mars. MGD3 extends from 65°N to 65°S latitude and includes ~550 dune fields, covering ~70,000 km2, with an estimated total volume of ~3,600 km3. This area, when combined with polar dune estimates, suggests moderate- to large-size dune field coverage on Mars may total ~800,000 km2, ~6 times less than the total areal estimate of ~5,000,000 km2 for terrestrial dunes. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow-angle (MOC NA) images allow, we classify dunes and include dune slipface measurements, which are derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid (referred to as dune centroid azimuth) is calculated and can provide an accurate method for tracking dune migration within smooth-floored craters. These indicators of wind direction are compared to output from a general circulation model (GCM). Dune centroid azimuth values generally correlate to regional wind patterns. Slipface orientations are less well correlated, suggesting that local topographic effects may play a larger role in dune orientation than regional winds.
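The dune centroid azimuth is a simple bearing between two centroids; a planar sketch that assumes both centroids are given in a common map projection (x east, y north) and ignores planetary curvature:

```python
import math

def centroid_azimuth(crater_xy, dune_xy):
    """Azimuth, in degrees clockwise from north, from a crater centroid
    to the centroid of the dune field it contains."""
    dx = dune_xy[0] - crater_xy[0]  # eastward offset
    dy = dune_xy[1] - crater_xy[1]  # northward offset
    return math.degrees(math.atan2(dx, dy)) % 360.0
```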
NASA Astrophysics Data System (ADS)
Gao, Xiangdong; Liu, Guiqian
2015-01-01
During deep penetration laser welding, a plume (weak plasma) and spatters are produced by the ejection of weld material under strong laser heating. The characteristics of the plume and spatters are related to welding stability and quality. These characteristics were investigated during high-power disk laser bead-on-plate welding of Type 304 austenitic stainless steel plates at a continuous-wave laser power of 10 kW. An ultraviolet- and visible-sensitive high-speed camera was used to capture the metallic plume and spatter images. The plume area, the laser beam path through the plume, the swing angle, the distance between the laser beam focus and the plume image centroid, the abscissa of the plume centroid and the number of spatters are defined as eigenvalues, and the weld bead width is used as a characteristic parameter reflecting welding stability. Welding status was distinguished by a support vector machine (SVM) after data normalization and characteristic analysis. PCA (principal component analysis) feature extraction was used to reduce the dimensionality of the feature space, and PSO (particle swarm optimization) was used to optimize the parameters of the SVM. Finally, a classification model based on the SVM was established to estimate the weld bead width and welding stability. Experimental results show that the established SVM-based algorithm could effectively distinguish the variation of weld bead width, thus providing an experimental example of monitoring high-power disk laser welding quality.
Retinal fundus images for glaucoma analysis: the RIGA dataset
NASA Astrophysics Data System (ADS)
Almazroa, Ahmed; Alodhayb, Sami; Osman, Essameldin; Ramadan, Eslam; Hummadi, Mohammed; Dlaim, Mohammed; Alkatee, Muhannad; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2018-03-01
Glaucoma neuropathy is a major cause of irreversible blindness worldwide. Current models of chronic care will not be able to close the gap between the growing prevalence of glaucoma and the challenges of access to healthcare services. Teleophthalmology is being developed to close this gap. In order to develop automated techniques for glaucoma detection that can be used in teleophthalmology, we have developed a large retinal fundus dataset. A de-identified dataset of retinal fundus images for glaucoma analysis (RIGA) was derived from three sources, for a total of 750 images. The optic cup and disc boundaries in each image were marked and annotated manually by six experienced ophthalmologists, and cup-to-disc ratio (CDR) estimates were included. Six parameters were extracted and assessed among the ophthalmologists: the disc area and centroid, the cup area and centroid, and the horizontal and vertical cup-to-disc ratios. The inter-observer annotations were compared by calculating the standard deviation (SD) for every image across the six ophthalmologists, in order to determine the outliers among the six and to filter the corresponding images. The dataset will be made available to the research community in order to crowd-source further analyses from other research groups and to develop, validate and implement analysis algorithms appropriate for tele-glaucoma assessment. The RIGA dataset can be freely accessed online through the University of Michigan Deep Blue website (doi:10.7302/Z23R0R29).
NASA Astrophysics Data System (ADS)
Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.
2012-12-01
The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. DBMS values can also be interpreted in terms of the thermal structure of the crust, which is a sensitive parameter depending on many properties of the crust, e.g. modes of deformation, depths of brittle and ductile deformation zones, regional heat-flow variations, seismicity, subsidence/uplift patterns and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on fractal distribution has been proposed. We applied this modified centroid method to the aeromagnetic data of the central Indian region, selecting 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower DBMS values are found for the western and southern portions of the Indian shield. The DBMS is found to be as shallow as the middle crust in the south-west Deccan trap and probably deeper than the Moho in the Chhattisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and elsewhere it is shallower than the Moho. The DBMS indicates the complex nature of the Indian crust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denney, K. D.; Peterson, B. M.; Horne, Keith
We use the coadded spectra of 32 epochs of Sloan Digital Sky Survey (SDSS) Reverberation Mapping Project observations of 482 quasars with z > 1.46 to highlight systematic biases in the SDSS- and Baryon Oscillation Spectroscopic Survey (BOSS)-pipeline redshifts due to the natural diversity of quasar properties. We investigate the characteristics of this bias by comparing the BOSS-pipeline redshifts to an estimate from the centroid of He ii λ1640. He ii has a low equivalent width but is often well-defined in high-S/N spectra, does not suffer from self-absorption, and has a narrow component which, when present (the case for about half of our sources), produces a redshift estimate that, on average, is consistent with that determined from [O ii] to within the He ii and [O ii] centroid measurement uncertainties. The large redshift differences of ~1000 km s^-1, on average, between the BOSS-pipeline and He ii-centroid redshifts suggest there are significant biases in a portion of BOSS quasar redshift measurements. Adopting the He ii-based redshifts shows that C iv does not exhibit a ubiquitous blueshift for all quasars, given the precision probed by our measurements. Instead, we find a distribution of C iv-centroid blueshifts across our sample, with a dynamic range that (i) is wider than that previously reported for this line, and (ii) spans C iv centroids from those consistent with the systemic redshift to those with significant blueshifts of thousands of kilometers per second. These results have significant implications for measurement and use of high-redshift quasar properties and redshifts, and studies based thereon.
Airborne target tracking algorithm against oppressive decoys in infrared imagery
NASA Astrophysics Data System (ADS)
Sun, Xiechang; Zhang, Tianxu
2009-10-01
This paper presents an approach for tracking an airborne target against oppressive infrared decoys. An oppressive decoy lures an infrared-guided missile by its high infrared radiation. Traditional tracking algorithms lose stability, and may even fail, when an airborne target continuously dispenses many decoys. The proposed approach first determines an adaptive tracking window, centred at a target position predicted from a uniform-motion model. Different strategies are applied to determine the tracking window size according to the target state. The image within the tracking window is segmented and multiple features of candidate targets are extracted. The most similar candidate target is associated with the tracked target using a decision function that calculates a weighted sum of normalized feature differences between two comparable targets. The integrated intensity ratio of the associated target and the tracked target, and the target centroid, are examined to estimate the target state in the presence of decoys. The tracking ability and robustness of the proposed approach have been validated by processing available real-world and simulated infrared image sequences containing airborne targets and oppressive decoys.
Anatomy guided automated SPECT renal seed point estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar; Kumar, Sailendra
2010-04-01
Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is used to study kidney function, and the challenge is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point locations of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not differ much across patients. A model is generated based on manual segmentation of the bladder and both kidneys in 10 patient datasets (including sum and max images), and centroids are estimated for the manually segmented bladder and kidneys. The relatively easy bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors observed between ground-truth centroid coordinates and the values estimated by our approach are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation in noisy cases.
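The regression step can be sketched as follows; the training arrays here are placeholders standing in for the 10 manually segmented datasets, and the linear model is an assumption about the form of the paper's model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder training data: bladder centroids (x, y) and the
# corresponding kidney centroids (xL, yL, xR, yR) from manual segmentation.
bladder_train = np.random.rand(10, 2)
kidneys_train = np.random.rand(10, 4)
model = LinearRegression().fit(bladder_train, kidneys_train)

def kidney_seed_points(bladder_centroid):
    """Predict left/right kidney seed coordinates from a newly
    segmented bladder centroid."""
    return model.predict(np.asarray(bladder_centroid).reshape(1, -1))[0]
```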
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated, while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Marouane, H; Shirazi-Adl, A; Adouni, M
2016-01-25
Evaluation of the contact forces and contact centers of the tibiofemoral joint in gait has crucial biomechanical and pathological consequences; however, it involves difficulties and limitations in in vitro cadaver and in vivo imaging studies. The goal is to estimate the total contact forces (CF) and the locations of the contact centers (CC) on the medial and lateral plateaus using results computed by a validated finite element model simulating the stance phase of gait for normal subjects as well as subjects with osteoarthritis, altered varus-valgus alignment and altered posterior tibial slope. Using the foregoing contact results, six methods commonly used in the literature are also applied to estimate and compare the locations of the CC at six periods of the stance phase (0%, 5%, 25%, 50%, 75% and 100%). TF joint contact forces are greater on the lateral plateau very early in stance and on the medial plateau thereafter, during the 25-100% stance periods. Large excursions in the location of the CC (>17 mm) are computed, especially on the medial plateau in the mediolateral direction. The various reported models estimate quite different CCs, with much greater variations (~15 mm) in the mediolateral direction on both plateaus. Compared with our accurately computed CCs taken as the gold standard, the centroid-of-contact-area algorithm yielded the smallest differences (except in the mediolateral direction on the medial plateau, at ~5 mm), whereas the contact point and weighted center of proximity algorithms resulted overall in the greatest differences. Large movements in the location of the CC should be considered when attempting to estimate TF compartmental contact forces in gait.
Zhu, Zhaoyi; Mu, Quanquan; Li, Dayu; Yang, Chengliang; Cao, Zhaoliang; Hu, Lifa; Xuan, Li
2016-10-17
The centroid-based Shack-Hartmann wavefront sensor (SHWFS) treats the sampled wavefronts in the sub-apertures as planes, and the slopes of the sub-wavefronts are used to reconstruct the whole pupil wavefront. The problem is that the centroid method may fail to sense the high-order modes under strong turbulence, decreasing the precision of the whole pupil wavefront reconstruction. To solve this problem, we propose a sub-wavefront estimation method for the SHWFS based on the focal-plane sensing technique, by which more Zernike modes than the two slopes can be sensed in each sub-aperture. In this paper, the effects of the related parameters, such as the spot size, the phase offset with its set amplitude and the number of pixels in each sub-aperture, on the sub-wavefront estimation method are analyzed, and these parameters are optimized to achieve high efficiency. After the optimization, open-loop measurement is realized. For the sub-wavefront sensing, we achieve a large linearity range of 3.0 rad RMS for Zernike modes Z2 and Z3, and 2.0 rad RMS for Zernike modes Z4 to Z6, when the number of pixels does not exceed 8 × 8 in each sub-aperture. The whole pupil wavefront reconstruction with the modified SHWFS is realized to analyze the improvements brought by the optimized sub-wavefront estimation method. Sixty-five Zernike modes can be reconstructed with a modified SHWFS containing only 7 × 7 sub-apertures, which could reconstruct only 35 modes by the centroid method, and the mean RMS errors of the residual phases are less than 0.2 rad², lower than the 0.35 rad² of the centroid method.
Basic firefly algorithm for document clustering
NASA Astrophysics Data System (ADS)
Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza
2015-12-01
Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to get trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).
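Since ADDC is the objective driving the firefly search, a minimal sketch may help; the function name and the equal weighting of clusters are assumptions, as the abstract does not spell out the exact normalisation:

```python
import numpy as np

def addc(X, labels, centroids):
    """Average Distance to Document Centroid: the mean, over clusters,
    of the average Euclidean distance from each document vector to its
    cluster centroid. Lower ADDC means more compact clusters."""
    means = []
    for c, centroid in enumerate(centroids):
        members = X[labels == c]
        if len(members):
            means.append(np.linalg.norm(members - centroid, axis=1).mean())
    return float(np.mean(means))
```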
Near real-time estimation of the seismic source parameters in a compressed domain
NASA Astrophysics Data System (ADS)
Rodriguez, Ismael A. Vera
Seismic events can be characterized by their origin time, location and moment tensor. Fast estimation of these source parameters is important in areas of geophysics like earthquake seismology, and the monitoring of seismic activity produced by volcanoes, mining operations and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters in a sequential procedure: first determining origin time and location (e.g., epicentre, hypocentre or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of the present thesis is to address the simultaneous estimation of origin time, location and moment tensor of seismic events. The proposed method displays the benefits of being: 1) automatic, 2) continuous and, depending on the scale of application, 3) of providing results in real time or near real time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is determined through the analysis of synthetic and real data examples. The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells; however, an example using real earthquake data is also presented for validation. The thesis is complemented with a resolvability analysis of the moment tensor. The analysis targets common monitoring geometries employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, along with an algorithm for improved automatic time-picking using non-linear inversion constraints.
Use of three-point taper systems in timber cruising
James W. Flewelling; Richard L. Ernst; Lawrence M. Raynes
2000-01-01
Tree volumes and profiles are often estimated as functions of total height and DBH. Alternative estimators include form-class methods, importance sampling, the centroid method, and multi-point profile (taper) estimation systems; all of these require some measurement or estimate of upper stem diameters. The multi-point profile system discussed here allows for upper stem...
Quantifying Void Ratio in Granular Materials Using Voronoi Tessellation
NASA Technical Reports Server (NTRS)
Alshibli, Khalid A.; El-Saidany, Hany A.; Rose, M. Franklin (Technical Monitor)
2000-01-01
The Voronoi technique was used to calculate the local void ratio distribution of granular materials. It was implemented in an application-oriented image processing and analysis algorithm capable of extracting object edges, separating adjacent particles, obtaining the centroid of each particle, generating Voronoi polygons, and calculating the local void ratio. Details of the algorithm's capabilities and features are presented. Verification calculations included performing manual digitization of synthetic images using Oda's method and the Voronoi polygon system. The developed algorithm yielded very accurate measurements of the local void ratio distribution. Voronoi tessellation has the advantage, compared to Oda's method, of offering a well-defined polygon generation criterion that can be implemented in an algorithm to automatically calculate the local void ratio of particulate materials.
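A minimal sketch of the Voronoi step, assuming 2D particle centroids and already-measured particle areas (scipy's qhull wrapper stands in for the paper's own image-processing pipeline; boundary particles with unbounded cells are simply skipped):

```python
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(verts):
    """Shoelace area of a convex cell; vertices are angle-sorted first."""
    c = verts.mean(axis=0)
    ang = np.arctan2(verts[:, 1] - c[1], verts[:, 0] - c[0])
    v = verts[np.argsort(ang)]
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def local_void_ratios(centroids, particle_areas):
    """For each particle, void ratio e = (A_cell - A_solid) / A_solid,
    where A_cell is the area of the particle's Voronoi polygon."""
    vor = Voronoi(centroids)
    e = {}
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or not region:
            continue  # unbounded cell at the specimen boundary
        cell = polygon_area(vor.vertices[region])
        e[i] = (cell - particle_areas[i]) / particle_areas[i]
    return e
```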
Optimal Partitioning of a Data Set Based on the "p"-Median Model
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2008-01-01
Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…
Design of long induction linacs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caporaso, G.J.; Cole, A.G.
1990-09-06
A self-consistent design strategy for induction linacs is presented which addresses the issues of brightness preservation against space charge induced emittance growth, minimization of the beam breakup instability and the suppression of beam centroid motion due to chromatic effects (corkscrew) and misaligned focusing elements. A simple steering algorithm is described that widens the effective energy bandwidth of the transport system.
ACE: Automatic Centroid Extractor for real time target tracking
NASA Technical Reports Server (NTRS)
Cameron, K.; Whitaker, S.; Canaris, J.
1990-01-01
A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster-scan image into objects and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real-time processing of video images having a pixel rate of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial-link communications channel with no additional hardware. The full-custom VLSI processor was implemented in a 1.6 μm CMOS process and measures 7200 μm on a side.
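The grouping-plus-centroiding computation is easy to emulate in software; a sketch using scipy's connected-component labelling (the chip's grouping rule for concave shapes is more involved than the 4-connectivity assumed here):

```python
import numpy as np
from scipy import ndimage

def frame_centroids(frame, threshold=0):
    """Group contiguous above-threshold pixels into objects and return
    each object's intensity-weighted centroid as (row, col)."""
    labels, n = ndimage.label(frame > threshold)  # default 4-connectivity
    if n == 0:
        return []
    return ndimage.center_of_mass(frame, labels, np.arange(1, n + 1))
```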
Nonlinear Motion Tracking by Deep Learning Architecture
NASA Astrophysics Data System (ADS)
Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.
2018-03-01
In the world of Artificial Intelligence, object motion tracking is one of the major problems. Extensive research is being carried out to track people in crowds. This paper presents a unique technique for nonlinear motion tracking in the absence of prior knowledge of the nature of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained using real-time recurrent learning. We have tweaked the standard algorithm slightly, accumulating the gradient over a few previous iterations instead of using just the current iteration as is the norm. We show that for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-08
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparison with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., the kidney), the thresholding method provided the best speed (< 1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
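Both figures of merit used here are straightforward to compute; a minimal sketch, assuming binary masks on a common pixel grid and isotropic pixel spacing (function names are ours):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid(mask):
    """Centroid (row, col) of a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def tre(manual, auto, pixel_spacing=1.0):
    """Target registration error: distance between the ROI centroids."""
    return pixel_spacing * np.linalg.norm(centroid(manual) - centroid(auto))
```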
Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment
NASA Astrophysics Data System (ADS)
Melgar Moctezuma, Diego
This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high-rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. This algorithm is demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah earthquake and the 2011 Mw 9.0 Tohoku-oki events. This dissertation also shows that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental in quickly determining basic parameters of the earthquake source. We show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and most importantly, kinematic slip inversions. Throughout the dissertation special emphasis is placed on how to compute these source models with minimal interaction from a network operator. Finally we show that the incorporation of off-shore data such as ocean-bottom pressure and RTK-GPS buoys can better constrain the shallow slip of large subduction events. We demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors are detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.
A Model-Based Approach for the Measurement of Eye Movements Using Image Processing
NASA Technical Reports Server (NTRS)
Sung, Kwangjae; Reschke, Millard F.
1997-01-01
This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as droopy eyelids and light reflections while maintaining the measurement resolution available from the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search method over pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least-squares fit of the circular-disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.
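The paper's own search uses precomputed lookup tables; purely as an illustration of the underlying model-fitting step, a least-squares circle fit to candidate pupil edge points can be written algebraically (the Kasa formulation, a common shortcut rather than the paper's method):

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares (Kasa) circle fit to edge points.
    Solves x^2 + y^2 = 2ax + 2by + c, where c = r^2 - a^2 - b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r  # center (a, b) and radius r
```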
Performance analysis of multiple PRF technique for ambiguity resolution
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
For short-wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar at different PRFs in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and arbitrary numerical values of the PRFs. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples, using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters, demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.
NASA Astrophysics Data System (ADS)
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves an average locational accuracy of 22 microns for the vessel wall, and 0.985 and 0.970 for the Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of the vessel lumen in an intraoperative time frame.
Centroid stabilization in alignment of FOA corner cube: designing of a matched filter
NASA Astrophysics Data System (ADS)
Awwal, Abdul; Wilhelmsen, Karl; Roberts, Randy; Leach, Richard; Miller Kamm, Victoria; Ngo, Tony; Lowe-Webb, Roger
2015-02-01
The current automation of image-based alignment of NIF high-energy laser beams is providing the capability of executing multiple target shots per day. An important aspect of performing multiple shots in a day is reducing the additional time spent aligning specific beams due to perturbations in those beam images. One such alignment is beam centration through the second and third harmonic generating crystals in the final optics assembly (FOA), which employs two retro-reflecting corner cubes to represent the beam center. The FOA houses the frequency conversion crystals for third harmonic generation as the beams enter the target chamber. Beam-to-beam variations and systematic beam changes over time in the FOA corner-cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based centroid detector. This work presents a systematic approach to maintaining FOA corner cube centroid templates so that stable position estimates are obtained, thereby leading to fast convergence of the alignment control loops. In the matched-filtering approach, a template is designed based on the most recent images taken in the last 60 days. The results show that the new filter reduces the divergence of the position estimation of FOA images.
Algorithms for computations of Loday algebras' invariants
NASA Astrophysics Data System (ADS)
Hussain, Sharifah Kartini Said; Rakhimov, I. S.; Basri, W.
2017-04-01
The paper is devoted to applications of computer programs to the structural study of Loday algebras. We present how these programs can be applied to compute various invariants of Loday algebras, and provide several programs in Maple to verify Loday algebra identities and isomorphisms between the algebras and, as a special case, to describe the automorphism groups, centroids and derivations.
Laser guide star wavefront sensing for ground-layer adaptive optics on extremely large telescopes.
Clare, Richard M; Le Louarn, Miska; Béchet, Clementine
2011-02-01
We propose ground-layer adaptive optics (GLAO) to improve the seeing on the 42 m European Extremely Large Telescope. Shack-Hartmann wavefront sensors (WFSs) with laser guide stars (LGSs) will experience significant spot elongation due to off-axis observation. This spot elongation influences the design of the laser launch location, laser power, WFS detector, and centroiding algorithm for LGS GLAO on an extremely large telescope. We show, using end-to-end numerical simulations, that with a noise-weighted matrix-vector-multiply reconstructor, the performance in terms of 50% ensquared energy (EE) of the side and central launch of the lasers is equivalent, the matched filter and weighted center of gravity centroiding algorithms are the most promising, and approximately 10×10 undersampled pixels are optimal. Significant improvement in the 50% EE can be observed with a few tens of photons/subaperture/frame, and no significant gain is seen by adding more than 200 photons/subaperture/frame. The LGS GLAO is not particularly sensitive to the sodium profile present in the mesosphere nor to a short-timescale (less than 100 s) evolution of the sodium profile. The performance of LGS GLAO is, however, sensitive to the atmospheric turbulence profile.
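Of the centroiding algorithms compared above, the weighted centre of gravity is the simplest to sketch; here the weighting map is assumed to be a reference spot profile (in practice it may be a Gaussian matched to the elongated LGS spot):

```python
import numpy as np

def weighted_cog(spot, weights):
    """Weighted centre of gravity of a Shack-Hartmann subaperture spot.
    Multiplying by a reference profile suppresses noisy wing pixels
    relative to the plain centroid. Returns (x, y) in pixel units."""
    w = spot * weights
    total = w.sum()
    ys, xs = np.indices(spot.shape)
    return (w * xs).sum() / total, (w * ys).sum() / total
```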
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil
2001-07-01
We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step-scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64 × 64 pixels, each 61 μm², and has a spectral range of 2-10.5 μm. Each imaging data set was collected at 16 cm⁻¹ resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue-type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue-type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
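A compact sketch of the standard Fuzzy C-Means iteration that produces such centroid spectra, assuming pixel spectra as rows of X and the usual fuzzifier m = 2 (initialisation and convergence checks are simplified; this is the generic algorithm, not the paper's exact implementation):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means: alternately update memberships U and centroids V.
    X is (n_pixels, n_wavenumbers); rows of V are 'centroid spectra'."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # (n, c) memberships
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]       # centroid spectra
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V
```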
NASA Astrophysics Data System (ADS)
Loboda, I. P.; Bogachev, S. A.
2015-07-01
We employ an automated detection algorithm to perform a global study of solar prominence characteristics. We process four months of TESIS observations in the He II 304 Å line taken close to the solar minimum of 2008-2009 and mainly focus on quiescent and quiescent-eruptive prominences. We detect a total of 389 individual features ranging from 25×25 to 150×500 Mm² in size and obtain distributions of many of their spatial characteristics, such as latitudinal position, height, size, and shape. To study their dynamics, we classify prominences as either stable or eruptive and calculate their average centroid velocities, which are found to rarely exceed 3 km/s. In addition, we give rough estimates of mass and gravitational energy for every detected prominence and use these values to estimate the total mass and gravitational energy of all simultaneously existing prominences (10¹²-10¹⁴ kg and 10²⁹-10³¹ erg). Finally, we investigate the form of the gravitational energy spectrum of prominences and derive it to be a power law of index −1.1 ± 0.2.
Zhu, Qingyuan; Xiao, Chunsheng; Hu, Huosheng; Liu, Yuanhui; Wu, Jinjin
2018-01-13
Articulated wheel loaders used in the construction industry are heavy vehicles and have poor stability and a high rate of accidents because of the unpredictable changes of their body posture, mass and centroid position in complex operation environments. This paper presents a novel distributed multi-sensor system for real-time attitude estimation and stability measurement of articulated wheel loaders to improve their safety and stability. Four attitude and heading reference systems (AHRS) are constructed using micro-electro-mechanical system (MEMS) sensors, and installed on the front body, rear body, rear axis and boom of an articulated wheel loader to detect its attitude. A complementary filtering algorithm is deployed for sensor data fusion in the system, so that the steady state margin angle (SSMA) can be measured in real time and used as the index for judging rollover stability. Experiments are conducted on a prototype wheel loader, and results show that the proposed multi-sensor system is able to detect potential unstable states of an articulated wheel loader in real time and with high accuracy.
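A single-axis sketch of the complementary-filter idea behind the AHRS fusion; the crossover constant and the accelerometer sign convention are assumptions, and the real system fuses full 3D attitude from four AHRS units:

```python
import math

def accel_pitch(ax, ay, az):
    """Pitch implied by the gravity vector seen by the accelerometer
    (the sign convention depends on sensor mounting)."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def complementary_update(pitch, gyro_rate, ax, ay, az, dt, alpha=0.98):
    """Blend high-frequency gyro integration with the low-frequency
    accelerometer reference; alpha sets the crossover frequency."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch(ax, ay, az)
```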
NASA Technical Reports Server (NTRS)
Han, Shin-Chan; Riva, Ricccardo; Sauber, Jeanne; Okal, Emile
2013-01-01
We quantify gravity changes after great earthquakes present within the 10 year long time series of monthly Gravity Recovery and Climate Experiment (GRACE) gravity fields. Using a spherical harmonic normal-mode formulation, the respective source parameters of moment tensor and double-couple were estimated. For the 2004 Sumatra-Andaman earthquake, the gravity data indicate a composite moment of 1.2×10²³ N m with a dip of 10°, in agreement with the estimate obtained at ultralong seismic periods. For the 2010 Maule earthquake, the GRACE solutions range from 2.0 to 2.7×10²² N m for dips of 12°-24° and centroid depths within the lower crust. For the 2011 Tohoku-Oki earthquake, the estimated scalar moments range from 4.1 to 6.1×10²² N m, with dips of 9°-19° and centroid depths within the lower crust. For the 2012 Indian Ocean strike-slip earthquakes, the gravity data delineate a composite moment of 1.9×10²² N m regardless of the centroid depth, comparing favorably with the total moment of the main ruptures and aftershocks. The smallest event we successfully analyzed with GRACE was the 2007 Bengkulu earthquake with M₀ ≈ 5.0×10²¹ N m. We found that the gravity data constrain the focal mechanism with the centroid only within the upper and lower crustal layers for thrust events. Deeper sources (i.e., in the upper mantle) could not reproduce the gravity observation, as the larger rigidity and bulk modulus at mantle depths inhibit the interior from changing its volume, thus reducing the negative gravity component. Focal mechanisms and seismic moments obtained in this study represent the behavior of the sources on temporal and spatial scales exceeding the seismic and geodetic spectrum.
Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.
Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang
2015-01-01
Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers, K-means, MSK-means and the support vector machine (SVM), are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means, and 0.7% higher accuracy than the SVM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Kuan-Hao; Hu, Lingzhi; Traughber, Melanie
Purpose: MR-based pseudo-CT has an important role in MR-based radiation therapy planning and PET attenuation correction. The purpose of this study is to establish a clinically feasible approach, including image acquisition, correction, and CT formation, for pseudo-CT generation of the brain using a single-acquisition, undersampled ultrashort echo time (UTE)-mDixon pulse sequence. Methods: Nine patients were recruited for this study. For each patient, a 190-s, undersampled, single-acquisition UTE-mDixon sequence of the brain was acquired (TE = 0.1, 1.5, and 2.8 ms). A novel method of retrospective trajectory correction of the free induction decay (FID) signal was performed based on point-spread functions of three external MR markers. Two-point Dixon images were reconstructed using the first and second echo data (TE = 1.5 and 2.8 ms). R2* images (1/T2*) were then estimated and used to provide bone information. Three image features, i.e., Dixon-fat, Dixon-water, and R2*, were used for unsupervised clustering. Five tissue clusters, i.e., air, brain, fat, fluid, and bone, were estimated using the fuzzy c-means (FCM) algorithm. A two-step, automatic tissue-assignment approach was proposed and designed according to the prior information of the given feature space. Pseudo-CTs were generated by a voxelwise linear combination of the membership functions of the FCM. A low-dose CT was acquired for each patient and used as the gold standard for comparison. Results: The contrast and sharpness of the FID images were improved after trajectory correction was applied. The mean of the estimated trajectory delay was 0.774 μs (max: 1.350 μs; min: 0.180 μs). The FCM-estimated centroids of different tissue types showed a distinguishable pattern for different tissues, and significant differences were found between the centroid locations of different tissue types. Pseudo-CT can provide additional skull detail and has low bias and absolute error of estimated CT numbers of voxels (−22 ± 29 HU and 130 ± 16 HU) when compared to low-dose CT. Conclusions: The MR features generated by the proposed acquisition, correction, and processing methods may provide representative clustering information and could thus be used for clinical pseudo-CT generation.
NASA Astrophysics Data System (ADS)
Bradley, Larry; Sipocz, Brigitta; Robitaille, Thomas; Tollerud, Erik; Deil, Christoph; Vinícius, Zè; Barbary, Kyle; Günther, Hans Moritz; Bostroem, Azalee; Droettboom, Michael; Bray, Erik; Bratholm, Lars Andersen; Pickering, T. E.; Craig, Matt; Pascual, Sergio; Greco, Johnny; Donath, Axel; Kerzendorf, Wolfgang; Littlefair, Stuart; Barentsen, Geert; D'Eugenio, Francesco; Weaver, Benjamin Alan
2016-09-01
Photutils provides tools for detecting and performing photometry of astronomical sources. It can estimate the background and background rms in astronomical images, detect sources in astronomical images, estimate morphological parameters of those sources (e.g., centroid and shape parameters), and perform aperture and PSF photometry. Written in Python, it is an affiliated package of Astropy (ascl:1304.002).
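A minimal usage example of the package's centroiding tools (the module layout follows recent photutils releases; the synthetic star image is ours, not from the package documentation):

```python
import numpy as np
from photutils.centroids import centroid_com, centroid_2dg

# Synthetic star: a 2D Gaussian blob plus a little noise.
yy, xx = np.mgrid[0:25, 0:25]
img = np.exp(-((xx - 12.3) ** 2 + (yy - 11.7) ** 2) / (2 * 2.0 ** 2))
img += np.random.default_rng(1).normal(0, 0.01, img.shape)

print(centroid_com(img))   # first-moment (centre-of-mass) centroid, (x, y)
print(centroid_2dg(img))   # centroid from a 2D Gaussian fit, (x, y)
```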
NASA Astrophysics Data System (ADS)
Lu, Shan; Zhang, Hanmo
2016-01-01
To meet the requirement of autonomous orbit determination, this paper proposes a fast curve-fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction, in order to achieve high-precision autonomous navigation. Firstly, combining the stable character of Earth's ultraviolet radiance with atmospheric radiative transfer modelling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which both eliminates noise efficiently and extracts Earth ultraviolet limb features accurately. Earth centroid locations on simulated images are then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an extended Kalman filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate the proposed method achieves sub-pixel Earth centroid location estimation and greatly enhances autonomous celestial navigation precision.
Mitigation of time-varying distortions in Nyquist-WDM systems using machine learning
NASA Astrophysics Data System (ADS)
Granada Torres, Jhon J.; Varughese, Siddharth; Thomas, Varghese A.; Chiuchiarelli, Andrea; Ralph, Stephen E.; Cárdenas Soto, Ana M.; Guerrero González, Neil
2017-11-01
We propose a machine learning-based nonsymmetrical demodulation technique relying on clustering to mitigate time-varying distortions derived from several impairments such as IQ imbalance, bias drift, phase noise and interchannel interference. Experimental results show that those impairments cause centroid movements in the received constellations, seen in time windows of 10k symbols in controlled scenarios. In our demodulation technique, the k-means algorithm iteratively identifies the cluster centroids in the constellation of the received symbols in short time windows by means of the optimization of decision thresholds for a minimum BER. We experimentally verified the effectiveness of this computationally efficient technique in multicarrier 16QAM Nyquist-WDM systems over 270 km links. Our nonsymmetrical demodulation technique outperforms the conventional QAM demodulation technique, reducing the OSNR requirement by up to ∼0.8 dB at a BER of 1×10⁻² for signals affected by interchannel interference.
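The windowed centroid-tracking idea reduces to a few lines of k-means over complex IQ samples; a sketch, with the window length, iteration count and seeding of the centroids at the ideal 16QAM points left as assumptions:

```python
import numpy as np

def demodulate_window(symbols, centroids, iters=10):
    """k-means over a short window of received complex symbols: the
    centroids drift with the impairments, and the nearest-centroid
    decision thresholds follow them."""
    for _ in range(iters):
        labels = np.abs(symbols[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(centroids.size):
            members = symbols[labels == k]
            if members.size:
                centroids[k] = members.mean()
    labels = np.abs(symbols[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids, labels

# Seed with the ideal 16QAM constellation, then track each symbol window.
ideal = np.array([x + 1j * y for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)],
                 dtype=complex)
```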
The Algorithm for MODIS Wavelength On-Orbit Calibration Using the SRCA
NASA Technical Reports Server (NTRS)
Montgomery, Harry; Che, Nianzeng; Parker, Kirsten; Bowser, Jeff
1998-01-01
The Spectro-Radiometric Calibration Assembly (SRCA) provides on-orbit spectral calibration of the MODerate resolution Imaging Spectroradiometer (MODIS) reflected solar bands, and this paper describes how it is accomplished. The SRCA has two adjacent exit slits: 1) the main slit and 2) the calibration slit. The output from the main slit is measured by a reference silicon photo-diode (SIPD) and then passes through the MODIS. The output from the calibration slit passes through a piece of didymium transmission glass and is then measured by a calibration SIPD. The centroids of the sharp spectral peaks of the didymium glass are utilized as wavelength standards. After normalization using the reference SIPD signal to eliminate the effects of the illuminating source spectra, the calibration SIPD establishes the relationship between the peaks of the didymium spectra and the grating angle; this is accomplished through the grating equation. In the grating equation, the monochromator parameters β (the half angle between the incident and diffracted beams) and θ_off (the offset angle of the grating motor) are determined by matching, in a least-squares sense, the known centroid wavelengths of the didymium peaks and the calculated centroid grating angles from the calibration SIPD signals for the peaks. A displacement between the calibration SIPD and the reference SIPD complicates the signal processing.
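For reference, the relation being fitted is the standard monochromator grating equation; in a common half-angle form (sign conventions and the exact SRCA geometry are instrument-specific, so treat this as a sketch), with groove spacing d, diffraction order m, and grating-motor angle θ:

```latex
m\lambda \;=\; 2d\,\cos\beta\,\sin\!\left(\theta-\theta_{\mathrm{off}}\right)
```

Fitting the didymium peak centroids to this relation in a least-squares sense then yields β and θ_off.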
Discrimination of different sub-basins on Tajo River based on water influence factor
NASA Astrophysics Data System (ADS)
Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.
2009-04-01
Numerical taxonomy has been applied to classify the waters of the Tajo basin (Spain) down to the Portuguese border. A total of 52 stations measuring 15 water variables were used in this study. The different groups were obtained by applying a Euclidean distance among stations (distance classification) and a Euclidean distance between each station and the centroid estimated among them (centroid classification), varying the number of parameters and with or without variable typification. To compare the classifications, a log-log relation between the number of groups created and the distances was established to select the best one. The centroid classification was observed to be more appropriate, following the natural constraints more logically than the minimum distance among stations. Variable typification does not improve the classification except when the centroid method is applied. Taking the ions and their sums into consideration as variables improved the classification. Stations are grouped based on electric conductivity (CE), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups created shows a certain variation in ion concentrations and ion ratios. However, the variation in each ion among groups differs depending on the case. For the last group, regardless of the classification, all ions generally increase. Comparing the dendrograms and the groups they originated, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on the water:
1. Higher ombrogenic influence (rain fed).
2. Ombrogenic and pedogenic influence (rain and groundwater fed).
3. Pedogenic influence.
4. Lithogenic influence (geological bedrock).
5. Higher combined ombrogenic and lithogenic influence.
Line following using a two camera guidance system for a mobile robot
NASA Astrophysics Data System (ADS)
Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.
1996-10-01
Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot has been designed for the 1996 Automated Unmanned Vehicle Society competition, which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line-following algorithm images two windows and locates the line centroid in each; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship is established between the image coordinates of the points and their corresponding ground coordinates. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot and, when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
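A sketch of the geometry step, assuming the two window centroids have already been projected into ground-plane coordinates (the camera-to-ground mapping itself is calibration-specific and omitted here):

```python
import numpy as np

def line_from_windows(p1, p2, robot_xy=(0.0, 0.0)):
    """Heading angle of the line through the two window centroids (in
    ground coordinates) and the perpendicular distance from the robot
    centroid to that line: the two quantities fed to steering control."""
    p1, p2, r = (np.asarray(v, dtype=float) for v in (p1, p2, robot_xy))
    d = p2 - p1
    angle = np.arctan2(d[1], d[0])
    v = r - p1
    dist = abs(d[0] * v[1] - d[1] * v[0]) / np.linalg.norm(d)  # 2D cross product
    return angle, dist
```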
Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing
2016-03-03
This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) in which each node is equipped with sensors. The underlying topology of the WSN is assumed to be strongly connected. The consensus algorithm from multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain faster convergence as well as a higher probability of reaching the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
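The consensus building block can be sketched in a few lines. Assumed here: a synchronous, undirected network given by a 0/1 adjacency matrix, a step size below the inverse maximum degree, and per-cluster local sums and counts already computed at each node (the paper's directed, strongly connected case needs a weighted variant):

```python
import numpy as np

def consensus_step(values, A, eps):
    """One synchronous consensus iteration: every node moves toward the
    average of its neighbours' values."""
    deg = A.sum(axis=1, keepdims=True)
    return values + eps * (A @ values - deg * values)

def distributed_centroid(local_sums, local_counts, A, eps=0.1, iters=300):
    """Nodes agree on the network-average cluster sum and count; since
    centroid = total sum / total count = average sum / average count,
    every node ends up holding (approximately) the global centroid."""
    s, c = local_sums.astype(float), local_counts.astype(float)
    for _ in range(iters):
        s = consensus_step(s, A, eps)
        c = consensus_step(c, A, eps)
    return s / c
```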
NASA Astrophysics Data System (ADS)
Herbonnet, Ricardo; Buddendiek, Axel; Kuijken, Konrad
2017-03-01
Context. Current optical imaging surveys for cosmology cover large areas of sky. Exploiting the statistical power of these surveys for weak lensing measurements requires shape measurement methods with subpercent systematic errors. Aims: We introduce a new weak lensing shear measurement algorithm, shear nulling after PSF Gaussianisation (SNAPG), designed to avoid the noise biases that affect most other methods. Methods: SNAPG operates on images that have been convolved with a kernel that renders the point spread function (PSF) a circular Gaussian, and uses weighted second moments of the sources. The response of such second moments to a shear of the pre-seeing galaxy image can be predicted analytically, allowing us to construct a shear nulling scheme that finds the shear parameters for which the observed galaxies are consistent with an unsheared, isotropically oriented population of sources. The inverse of this nulling shear is then an estimate of the gravitational lensing shear. Results: We identify the uncertainty of the estimated centre of each galaxy as the source of noise bias, and incorporate an approximate estimate of the centroid covariance into the scheme. We test the method on extensive suites of simulated galaxies of increasing complexity, and find that it is capable of shear measurements with multiplicative bias below 0.5 percent.
NASA Technical Reports Server (NTRS)
Mullally, Fergal
2017-01-01
We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.
The effect of time-variant acoustical properties on orchestral instrument timbres
NASA Astrophysics Data System (ADS)
Hajda, John Michael
1999-06-01
The goal of this study was to investigate the timbre of orchestral instrument tones. Kendall (1986) showed that time-variant features are important to instrument categorization. But the relative salience of specific time-variant features to each other and to other acoustical parameters is not known. As part of a convergence strategy, a battery of experiments was conducted to assess the importance of global amplitude envelope, spectral frequencies, and spectral amplitudes. An omnibus identification experiment investigated the salience of global envelope partitions (attack, steady state, and decay). Valid partitioning models should identify important boundary conditions in the evolution of a signal; therefore, these models should be based on signal characteristics. With the use of such a model for sustained continuant tones, the steady-state segment was more salient than the attack. These findings contradicted previous research, which used questionable operational definitions for signal partitioning. For the next set of experiments, instrument tones were analyzed by phase vocoder, and stimuli were created by additive synthesis. Edits and combinations of edits controlled global amplitude envelope, spectral frequencies, and relative spectral amplitudes. Perceptual measurements were made with distance estimation, Verbal Attribute Magnitude Estimation, and similarity scaling. Results indicated that the primary acoustical attribute was the long-time-average spectral centroid. Spectral centroid is a measure of the center of energy distribution for spectral frequency components. Instruments with high values of spectral centroid (bowed strings) sound nasal while instruments with low spectral centroid (flute, clarinet) sound not nasal. The secondary acoustical attribute was spectral amplitude time variance. Predictably, time variance correlated highly with subject ratings of vibrato. The control of relative spectral amplitudes was more salient than the control of global envelope and spectral frequencies. Both amplitude phase relationships and time-variant spectral centroid were affected by the control of relative spectral amplitudes. Further experimentation is required to determine the salience of these features. The finding that instrumental vibrato is a manifestation of spectral amplitude time variance contradicts the common belief that vibrato is due to frequency (pitch) and intensity (loudness) modulation. This study suggests that vibrato is due to a periodic modulation in timbre. Future research should employ musical contexts.
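For concreteness, the long-time-average spectral centroid referred to here is conventionally the amplitude-weighted mean frequency of the spectral components, e.g. with time-averaged harmonic amplitudes ā_k at frequencies f_k:

```latex
f_c \;=\; \frac{\sum_{k} f_k\,\bar{a}_k}{\sum_{k} \bar{a}_k}
```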
Hartmann Testing of X-Ray Telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.; Biskasch, Michael; Zhang, William W.
2013-01-01
Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. The mean cone angle, mean radial height (average radius), and focal length of the telescope can also be estimated if the centroid data are measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full-circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieve the alignment errors and low-order circumferential errors accurately.
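The flavour of the retrieval can be sketched as a one-cycle Fourier fit of centroid position versus slit azimuth: decenter/tilt-type alignment errors appear in the cos/sin terms, while radius or cone-angle variation enters the mean term. This is only a schematic of the approach, not the OSAC equations from the paper:

```python
import numpy as np

def fit_low_order(phi, c):
    """Least-squares fit of centroid position c versus slit azimuth phi
    to  c(phi) = a0 + a1*cos(phi) + b1*sin(phi).  Returns [a0, a1, b1]."""
    A = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
    coef, *_ = np.linalg.lstsq(A, c, rcond=None)
    return coef
```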
Segmental Analysis of Cardiac Short-Axis Views Using Lagrangian Radial and Circumferential Strain.
Ma, Chi; Wang, Xiao; Varghese, Tomy
2016-11-01
Accurate description of myocardial deformation in the left ventricle is a three-dimensional problem, requiring three normal strain components along its natural axis, that is, longitudinal, radial, and circumferential strains. Although longitudinal strains are best estimated from long-axis views, radial and circumferential strains are best depicted in short-axis views. An algorithm that utilizes a polar grid for short-axis views, previously developed in our laboratory for a Lagrangian description of tissue deformation, is utilized for radial and circumferential displacement and strain estimation. Deformation of the myocardial wall, obtained from numerical simulations with ANSYS and a finite-element-analysis-based canine heart model, was adapted as the input to a frequency-domain ultrasound simulation program to generate radiofrequency echo signals. Clinical in vivo data were also acquired from a healthy volunteer. Local displacements estimated along and perpendicular to the ultrasound beam propagation direction are then transformed into radial and circumferential displacements and strains using the polar grid based on a pre-determined centroid location. Lagrangian strain variations demonstrate good agreement with the ideal strain when compared with Eulerian results. Lagrangian radial and circumferential strain estimation results are also demonstrated for experimental data on a healthy volunteer. Lagrangian radial and circumferential strain tracking provide accurate results with the assistance of the polar grid, as demonstrated using both numerical simulations and an in vivo study. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.
2013-09-01
The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are found to be as low as 22 km in the south-western, Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies, and may represent thermal, compositional or petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
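As commonly formulated, the centroid method works on the radially averaged power spectrum P(k) of the magnetic anomaly: the top depth Z_t comes from the high-wavenumber slope, the centroid depth Z_0 from the low-wavenumber slope of the wavenumber-scaled spectrum, and the DBMS follows from their combination. A sketch of this conventional form, which the paper's scaling-distribution variant modifies:

```latex
\ln\!\sqrt{P(k)} \;\approx\; A - |k|\,Z_t, \qquad
\ln\!\frac{\sqrt{P(k)}}{|k|} \;\approx\; B - |k|\,Z_0, \qquad
Z_b \;=\; 2Z_0 - Z_t
```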
Bruno, Rossella; Alì, Greta; Giannini, Riccardo; Proietti, Agnese; Lucchi, Marco; Chella, Antonio; Melfi, Franca; Mussi, Alfredo; Fontanini, Gabriella
2017-01-10
Malignant pleural mesothelioma (MPM) is a rare asbestos-related cancer, aggressive and unresponsive to therapies. Histological examination of pleural lesions is the gold standard of MPM diagnosis, although it is sometimes hard to discriminate the epithelioid type of MPM from benign mesothelial hyperplasia (MH). This work aims to define a new molecular tool for the differential diagnosis of MPM, using the expression profile of 117 genes deregulated in this tumour. The gene expression analysis was performed by the nanoString System on tumour tissues from 36 epithelioid MPM and 17 MH patients, and on 14 mesothelial pleural samples analysed in a blind way. Data analysis included raw nanoString data normalization, unsupervised cluster analysis by Pearson correlation, the non-parametric Mann-Whitney U-test and molecular classification by the Uncorrelated Shrunken Centroid (USC) algorithm. The Mann-Whitney U-test found 35 genes upregulated and 31 downregulated in MPM. The unsupervised cluster analysis revealed two clusters, one composed only of MPM and one only of MH samples, thus revealing class-specific gene profiles. The USC algorithm identified two classifiers, one including 22 genes and the other 40 genes, able to properly classify all the samples as benign or malignant using gene expression data; both classifiers were also able to correctly determine, in a blind analysis, the diagnostic categories of all 14 unknown samples. In conclusion, we delineated a diagnostic tool combining molecular data (gene expression) and computational analysis (the USC algorithm), which can be applied in clinical practice for the differential diagnosis of MPM.
Su, Chun-Kuei; Chiang, Chia-Hsun; Lee, Chia-Ming; Fan, Yu-Pei; Ho, Chiu-Ming; Shyu, Liang-Yu
2013-01-01
Sympathetic nerves conveying central commands to regulate visceral functions often display activities in synchronous bursts. To understand how individual fibers fire synchronously, we establish "oligofiber recording techniques" to record "several" nerve fiber activities simultaneously, using in vitro splanchnic sympathetic nerve-thoracic spinal cord preparations of neonatal rats as experimental models. While distinct spike potentials were easily recorded from collagenase-dissociated sympathetic fibers, a problem arising from synchronous nerve discharges is a higher incidence of complex waveforms resulting from spike overlapping. Because commercial software does not provide an explicit solution for spike overlapping, a series of custom-made LabVIEW programs incorporating MATLAB scripts was written for spike sorting. Spikes were represented as data points after waveform feature extraction and automatically grouped by k-means clustering, followed by principal component analysis (PCA) to verify their waveform homogeneity. For dissimilar waveforms whose Hotelling's T2 distances from the cluster centroids were excessive, a unique data-based subtraction algorithm (SA) was used to determine whether they were complex waveforms resulting from superimposing a spike pattern close to the cluster centroid with other signals that could be observed in the original recordings. In comparison with commercial software, higher accuracy was achieved by analyses using our algorithms on synthetic data that contained synchronous spiking and complex waveforms. Moreover, both T2-selected and SA-retrieved spikes were combined as unit activities. Quantitative analyses were performed to evaluate whether unit activities truly originated from single fibers. We conclude that applications of our programs can help to resolve synchronous sympathetic nerve discharges (SND). PMID:24198782
ACOSS Eleven (Active Control of Space Structures)
1984-09-01
spatial integration with threshold level and system track threshold level reduction factor. 2.2.3 Track Acquisition In the HRAP/LRTP simulation, input ...in both row and column, however, then the track direction is determined to be diagonal. Also, as with the first tier, multiple hits are processed...for any system track before thresholding, clustering, and centroiding can produce the next frame to be input to the two-tier algorithm. As Figure 2-10
NASA Astrophysics Data System (ADS)
Kubota, T.; Saito, T.; Suzuki, W.; Hino, R.
2017-12-01
When an earthquake occurs in an offshore region, ocean bottom pressure gauges (OBP) observe the low-frequency (> 400 s) pressure change due to the tsunami and also the high-frequency (< 200 s) pressure change due to seismic waves (e.g., Filloux, 1983; Matsumoto et al., 2012). When the period of the seafloor motion is sufficiently long (> 20 s), the relation between the seafloor dynamic pressure change p and the seafloor vertical acceleration az is approximately given as p = ρ0h0az (ρ0: seawater density, h0: sea depth) (e.g., Bolshakova et al., 2011; Matsumoto et al., 2012; Saito and Tsushima, 2016, JGR; Saito, 2017, GJI). Based on this relation, OBP are expected to be usable as vertical accelerometers. If we use OBP deployed in offshore regions as seismometers, the station coverage improves, and so does the accuracy of the earthquake location. In this study, we analyzed seismograms together with seafloor dynamic pressure change records to estimate the CMT of the interplate earthquakes that occurred off the coast of Tohoku on 9 March 2011 (Mw 7.3 and 6.5) (Kubota et al., 2017, EPSL), and discussed the estimation accuracy of the centroid horizontal location. When the dynamic pressure change recorded by OBP was used in addition to the seismograms, the horizontal location of the CMT was reliably constrained. The centroid was located in the center of the rupture area estimated by the tsunami inversion analysis (Kubota et al., 2017). These CMTs had reverse-fault mechanisms consistent with interplate earthquakes and well reproduced the dynamic pressure signals in the OBP records. Meanwhile, when we used only the inland seismometers, the centroids were estimated to be outside the rupture area. This study proved that the dynamic pressure changes in OBP records can serve as seismic-wave records, which greatly helps to investigate the source process of offshore earthquakes far from the coast.
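As a worked example of the quoted long-period relation p = ρ0h0az, the sketch below (values illustrative, not from the paper) inverts an OBP pressure sample into a vertical acceleration:

```python
def pressure_to_acceleration(p_pa, depth_m, rho=1030.0):
    """Invert p = rho0 * h0 * az so an ocean-bottom pressure record can be
    used as a vertical accelerometer (valid for periods longer than ~20 s)."""
    return p_pa / (rho * depth_m)   # az in m/s^2

# e.g. a 1 hPa (100 Pa) dynamic pressure change at 5000 m depth
az = pressure_to_acceleration(100.0, 5000.0)   # ~1.9e-5 m/s^2
```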
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
GDPC: Gravitation-based Density Peaks Clustering algorithm
NASA Astrophysics Data System (ADS)
Jiang, Jianhua; Hao, Dehao; Chen, Yujun; Parmar, Milan; Li, Keqin
2018-07-01
The Density Peaks Clustering algorithm, which we refer to as DPC, is a novel and efficient density-based clustering approach that was published in Science in 2014. DPC has the advantages of discovering clusters with varying sizes and varying densities, but has limitations in detecting the number of clusters and identifying anomalies. We develop an enhanced algorithm with an alternative decision graph, based on gravitation theory and nearby distance, to identify centroids and anomalies accurately. We apply our method to several UCI and synthetic data sets. We report comparative clustering performance using F-Measure and 2-dimensional visualization. We also compare our method to other clustering algorithms, such as K-Means, Affinity Propagation (AP), and DPC. We present F-Measure scores and clustering accuracies of our GDPC algorithm compared to K-Means, AP, and DPC on different data sets. We show that GDPC has superior performance in its capability of: (1) reliably detecting the number of clusters; (2) efficiently aggregating clusters with varying sizes and densities; and (3) accurately identifying anomalies.
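For reference, a minimal sketch of the standard DPC decision-graph quantities (local density ρ and distance δ to the nearest higher-density point) that GDPC's gravitation-based decision graph replaces. This is the baseline DPC computation under stated assumptions, not the GDPC variant itself.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dpc_decision_graph(X, dc):
    """Local density rho (Gaussian kernel with cutoff dc) and distance delta
    to the nearest point of higher density, as in Density Peaks Clustering."""
    d = cdist(X, X)
    rho = np.sum(np.exp(-(d / dc) ** 2), axis=1) - 1.0   # exclude self-term
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta   # centroid candidates have both large rho and large delta
```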
Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha
2014-01-01
Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover by two methods the known wavefront aberrations expected from a population of normally-aberrated human eyes and bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with standard deviations (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, while the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
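For orientation, a minimal sketch of the conventional per-spot centroid step the abstract benchmarks against: each Shack-Hartmann spot is centroided in a window around its reference position and the displacement is converted to a local wavefront slope. Window size, focal length, pixel pitch, and integer reference coordinates are illustrative assumptions; the authors' global algorithm is not reproduced here.

```python
import numpy as np

def spot_slopes(img, ref_centers, win=8, focal_len=5e-3, pixel=5e-6):
    """Conventional SH analysis: centroid each spot in a window around its
    (integer-pixel) reference position, then convert displacement to slope."""
    slopes = []
    for (xr, yr) in ref_centers:
        sub = img[yr - win:yr + win, xr - win:xr + win].astype(float)
        yy, xx = np.mgrid[-win:win, -win:win]
        tot = sub.sum()
        cx, cy = (sub * xx).sum() / tot, (sub * yy).sum() / tot
        slopes.append((cx * pixel / focal_len, cy * pixel / focal_len))
    return np.array(slopes)  # least-squares fit to Zernike derivatives follows
```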
Research and implementation of finger-vein recognition algorithm
NASA Astrophysics Data System (ADS)
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray projection method. Inspired by the fact that features in vein areas appear similar to valleys, a novel method is proposed to extract the center and width of palm veins based on multi-directional gradients, which is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray-value distribution of the texture image. This algorithm can effectively overcome errors in texture extraction at the edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture extraction efficiency, matching accuracy, and algorithm efficiency.
SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.
Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui
2013-04-01
Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla
2013-12-01
To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).
The Long-Wave Infrared Earth Image as a Pointing Reference for Deep-Space Optical Communications
NASA Astrophysics Data System (ADS)
Biswas, A.; Piazzolla, S.; Peterson, G.; Ortiz, G. G.; Hemmati, H.
2006-11-01
Optical communications from space require an absolute pointing reference. Whereas at near-Earth and even planetary distances out to Mars and Jupiter a laser beacon transmitted from Earth can serve as such a pointing reference, for farther distances extending to the outer reaches of the solar system, the means for meeting this requirement remains an open issue. We discuss in this article the prospects and consequences of utilizing the Earth image sensed in the long-wave infrared (LWIR) spectral band as a beacon to satisfy the absolute pointing requirements. We have used data from satellite-based thermal measurements of Earth to synthesize images at various ranges and have shown the centroiding accuracies that can be achieved with prospective LWIR image sensing arrays. The nonuniform emissivity of Earth causes a mispointing bias error term that exceeds a provisional pointing budget allocation when using simple centroiding algorithms. Other issues related to implementing thermal imaging of Earth from deep space for the purposes of providing a pointing reference are also reported.
Numerical evaluation of a single ellipsoid motion in Newtonian and power-law fluids
NASA Astrophysics Data System (ADS)
Férec, Julien; Ausias, Gilles; Natale, Giovanniantonio
2018-05-01
A computational model is developed for simulating the motion of a single ellipsoid suspended in a Newtonian and a power-law fluid, respectively. Based on a finite element method (FEM), the approach consists of seeking solutions for the linear and angular particle velocities using a minimization algorithm, such that the net hydrodynamic force and torque acting on the ellipsoid are zero. For a Newtonian fluid subjected to a simple shear flow, Jeffery's predictions are recovered at any aspect ratio. The motion of a single ellipsoidal fiber is found to be only slightly disturbed by the shear-thinning character of the suspending fluid when compared with Jeffery's solutions. Surprisingly, the perturbation can be completely neglected for a particle with a large aspect ratio. Furthermore, the particle centroid is found to translate with the same linear velocity as the undisturbed simple shear flow evaluated at the particle centroid. This is confirmed by recent works based on experimental investigations and modeling approaches (1-2).
Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans
NASA Astrophysics Data System (ADS)
Biancardi, Alberto M.; Reeves, Anthony P.; Jirapatnakul, Artit C.; Apanasovitch, Tatiyana; Yankelevitz, David; Henschke, Claudia I.
2008-03-01
Accurate nodule volume estimation is necessary in order to estimate the clinically relevant growth rate or change in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules that were documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that each scan is assessed by four experienced thoracic radiologists and that boundaries are to be marked around the visible extent of nodules 3 mm and larger. Nodules were selected from the LIDC database with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image slices and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection criteria, with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid of each marked nodule was used as the seed point for the automated algorithm. 95 nodules (84.1%) were correctly segmented, though one was judged by the automated method not to meet the first selection criterion; of the remainder, eight (7.1%) were structurally too complex or extensively attached and 10 (8.8%) were considered improperly segmented after a simple visual inspection by a radiologist. Since the LIDC specifications, as noted above, instruct radiologists to include both solid and sub-solid parts, the automated method's core capability of segmenting solid tissue was augmented to also take into account the sub-solid parts of nodules. We ranked the distances of the automated-method estimates and the radiologist-based estimates from the median of the radiologist-based values. The automated method was in 76.6% of the cases closer to the median than at least one of the values derived from the manual markings, which indicates very good agreement with the radiologists' markings.
NASA Astrophysics Data System (ADS)
Reed, Judd E.; Rumberger, John A.; Buithieu, Jean; Behrenbeck, Thomas; Breen, Jerome F.; Sheedy, Patrick F., II
1995-05-01
Following myocardial infarction, the size of the infarcted region and the systolic functioning of the noninfarcted region are commonly assessed by various cross-sectional imaging techniques. A series of images representing successive phases of the cardiac cycle can be acquired by several imaging modalities including electron beam computed tomography, magnetic resonance imaging, and echocardiography. For the assessment of patterns of ventricular contraction, images are commonly acquired of ventricular cross-sections normal to the 'long' axis of the heart and parallel to the mitral valve plane. The endocardial and epicardial surfaces of the myocardium are identified. Then the ventricle is divided into sectors and the volumes of blood and myocardium within each sector at multiple phases of the cardiac cycle are measured. Regional function parameters are derived from these measurements. This generally mandates the use of a polar or cylindrical coordinate system. Various algorithms have been used to select the origin of this coordinate system. These include the centroid of the endocardial surface, the epicardial surface, or of a polygon whose vertices lie midway between the epicardial and endocardial surfaces of the myocardium (centerline method). Another algorithm has been developed in our laboratory. This uses the centroid (or center of mass) of the myocardium exclusive of the ventricular cavity. Each of these choices for origin of the coordinate system can be derived from the end-diastolic image or from the end-systolic image. Alternately, new coordinate systems can be selected for each phase of the cardiac cycle. These are referred to as 'floating' coordinate systems. A series of computer models have been developed in our laboratory to study the effects of each of these choices on the regional function parameters of normal ventricles and how these choices affect the quantification of regional abnormalities after myocardial infarction. The most sophisticated of these is an interactive program with a graphical user interface which facilitates the simulation of a wide variety of dynamic ventricular cross sections. Analysis of these simulations has led to a better understanding of how polar coordinate system placement influences the results of quantitative regional ventricular function assessment. It has also created new insight into how the appropriateness of the placement of such a polar coordinate system can be objectively assessed. The validity of the conclusions drawn from the analysis of simulated ventricular shapes was confirmed through the analysis of outlines extracted from cine electron beam computed tomographic images. This was done using another interactive software tool developed specifically for this purpose. With this tool, the effects on regional function parameters of various choices for origin placement can be directly observed. This has reinforced the conclusions drawn from the simulations and has led to the modification of the procedures used in our laboratory. Conclusions: The so-called floating coordinate systems are superior to fixed ones for quantification of regional left ventricular contraction in almost every respect. The use of regional ejection fractions with a coordinate system origin located at the centroid of the endocardial surface can lead to 180-degree errors in identifying the location of a myocardial infarction. This problem is less pronounced with midline and epicardium-based centroids and does not occur when the centroid of the myocardium is used.
The quantified migration of myocardial mass across sector boundaries is a useful indicator of an inappropriate choice of coordinate system origin. When the centroid of the myocardium falls well within the ventricular cavity, as it usually does, it is a better location for the origin for regional analysis than any of the other centroids analyzed.
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. on and off cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner take all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yoshioka, Satoshi
2015-08-01
We are planning JASMINE (Japan Astrometric Satellite Mission for INfrared Exploration) as a series of missions: Nano-JASMINE, Small-JASMINE, and JASMINE. Nano-JASMINE data analysis will be performed in collaboration with the Gaia data analysis team. We apply the Gaia core processing software, named AGIS, as the Nano-JASMINE core solution; its applicability has been confirmed by D. Michalik and the Gaia DPAC team. Converting telemetry data to AGIS input is the JASMINE team's task, and it includes centroid calculation of the stellar images. The accuracy of Gaia is two orders of magnitude better than that of Nano-JASMINE, but these are the only two astrometric satellite missions performing global astrometry with CCD detectors, so Nano-JASMINE will have a role in calibrating Gaia data. Bright-star centroiding is the most important science target. Small-JASMINE has a completely different observation strategy: it will perform step-stair observations with about a million observations of each individual star. Sub-milliarcsecond centroid errors of individual stellar images will be reduced by two orders of magnitude, reaching 10 microarcsecond astrometric accuracy by applying the square-root-N law to a million observations. Various systematic noise sources should be estimated, modelled, and subtracted. Some statistical studies will be shown in this poster.
Monahan, William B; Tingley, Morgan W
2012-01-01
The ability of species to respond to novel future climates is determined in part by their physiological capacity to tolerate climate change and the degree to which they have reached and continue to maintain distributional equilibrium with the environment. While broad-scale correlative climatic measurements of a species' niche are often described as estimating the fundamental niche, it is unclear how well these occupied portions actually approximate the fundamental niche per se, versus the fundamental niche that exists in environmental space, and what fitness values bounding the niche are necessary to maintain distributional equilibrium. Here, we investigate these questions by comparing physiological and correlative estimates of the thermal niche in the introduced North American house sparrow (Passer domesticus). Our results indicate that occupied portions of the fundamental niche derived from temperature correlations closely approximate the centroid of the existing fundamental niche calculated on a fitness threshold of 50% population mortality. Using these niche measures, a 75-year time series analysis (1930-2004) further shows that: (i) existing fundamental and occupied niche centroids did not undergo directional change, (ii) interannual changes in the two niche centroids were correlated, (iii) temperatures in North America moved through niche space in a net centripetal fashion, and consequently, (iv) most areas throughout the range of the house sparrow tracked the existing fundamental niche centroid with respect to at least one temperature gradient. Following introduction to a new continent, the house sparrow rapidly tracked its thermal niche and established continent-wide distributional equilibrium with respect to major temperature gradients. These dynamics were mediated in large part by the species' broad thermal physiological tolerances, high dispersal potential, competitive advantage in human-dominated landscapes, and climatically induced changes to the realized environmental space. Such insights may be used to conceptualize mechanistic climatic niche models in birds and other taxa.
NASA Astrophysics Data System (ADS)
Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.
2012-02-01
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting, and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with the presence of an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, that was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from the Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 +/- 0.33 mm and 3.10 +/- 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) adaptive sampling, (b) local adjustment sampling, and (c) finite element centroid sampling. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of the initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. These algorithms help speed up the clustering process by converging on a global optimum early, with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms, including the Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms, mimic swarming behavior, allowing them to cooperatively steer toward an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with the K-means clustering mechanism to enhance its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics for clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied to image segmentation as a case study of an application scenario.
Interferometric superlocalization of two incoherent optical point sources.
Nair, Ranjith; Tsang, Mankei
2016-02-22
A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
Mapping Urban Risk: Flood Hazards, Race, & Environmental Justice In New York
Maantay, Juliana; Maroko, Andrew
2009-01-01
This paper demonstrates the importance of disaggregating population data aggregated by census tracts or other units, for more realistic population distribution/location. A newly-developed mapping method, the Cadastral-based Expert Dasymetric System (CEDS), calculates population in hyper-heterogeneous urban areas better than traditional mapping techniques. A case study estimating population potentially impacted by flood hazard in New York City compares the impacted population determined by CEDS with that derived by centroid-containment method and filtered areal weighting interpolation. Compared to CEDS, 37 percent and 72 percent fewer people are estimated to be at risk from floods city-wide, using conventional areal weighting of census data, and centroid-containment selection, respectively. Undercounting of impacted population could have serious implications for emergency management and disaster planning. Ethnic/racial populations are also spatially disaggregated to determine any environmental justice impacts with flood risk. Minorities are disproportionately undercounted using traditional methods. Underestimating more vulnerable sub-populations impairs preparedness and relief efforts. PMID:20047020
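For illustration, a minimal shapely-based sketch of the two conventional baselines the study compares CEDS against: centroid-containment selection (count a tract's whole population if its centroid falls in the flood zone) and simple areal weighting (apportion population by area of overlap). The input structures are assumptions for the example.

```python
from shapely.geometry import Polygon

def impacted_population(tracts, flood_zone):
    """tracts: list of (Polygon, population) pairs; flood_zone: Polygon.
    Returns impacted-population estimates by the two conventional methods."""
    by_centroid = sum(pop for poly, pop in tracts
                      if flood_zone.contains(poly.centroid))
    by_area = sum(pop * poly.intersection(flood_zone).area / poly.area
                  for poly, pop in tracts)
    return by_centroid, by_area
```

Dasymetric methods such as CEDS instead redistribute each tract's population onto finer units (here, cadastral lots) before intersecting with the hazard zone.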
Anderson, Kyle; Segall, Paul
2013-01-01
Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov-Chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km3 with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
2010-08-01
astigmatism and other sources, and stay constant from time to time (LC Technologies, 2000). Systematic errors can sometimes reach many degrees of visual angle...Taking the average of all disparities would mean treating each as equally important regardless of whether they are from correct or incorrect mappings. In...likely stop somewhere near the centroid because the large hM basically treats every point equally (or nearly equally if using the multivariate
Centroid-Based Document Classification Algorithms: Analysis & Experimental Results
2000-03-06
stories such as baseball, football, basketball, and Olympics. In the first category, most of the documents contain the words Clinton and Lewinsky and hence...document. On the other hand, any of the sports-related words like baseball, football, and basketball appearing in a document will put the document in the... [table residue: stemmed centroid terms with weights — 0.15 diseas, 0.14 women, 0.13 heart, 0.12 drug; cluster 4: 0.41 newspap, 0.22 editor, 0.19 advertis, 0.14 media, 0.13 peruvian, 0.13 coverag, 0.12 percent, 0.12 journalist]
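For orientation, a minimal sketch of the basic centroid-based document classifier such reports analyze: each class is represented by the normalized mean of its tf-idf document vectors, and a test document is assigned to the most cosine-similar centroid. Names are illustrative assumptions.

```python
import numpy as np

def train_centroids(X, y):
    """X: L2-normalised tf-idf matrix (n_docs x n_terms); y: class labels.
    Each class centroid is the re-normalised mean of its document vectors."""
    cents = {}
    for c in np.unique(y):
        m = X[y == c].mean(axis=0)
        cents[c] = m / np.linalg.norm(m)
    return cents

def predict(x, cents):
    """Assign a unit-length document vector to the class whose centroid has
    the highest cosine similarity (a dot product, since vectors are unit)."""
    return max(cents, key=lambda c: float(x @ cents[c]))
```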
How to control if even experts are not sure: Robust fuzzy control
NASA Technical Reports Server (NTRS)
Nguyen, Hung T.; Kreinovich, Vladik YA.; Lea, Robert; Tolbert, Dana
1992-01-01
In real life, the degrees of certainty that correspond to one and the same expert can differ drastically, and fuzzy control algorithms translate these different degrees of uncertainty into different control strategies. In such situations, it is reasonable to choose a fuzzy control methodology that is the least vulnerable to this kind of uncertainty. It is shown that this 'robustness' demand leads to min and max for &- and V-operations, to 1-x for negation, and to the centroid as a defuzzification procedure.
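As a worked example of the centroid defuzzification the paper singles out, a minimal numeric sketch (the membership function and grid are illustrative):

```python
import numpy as np

def centroid_defuzzify(x, mu):
    """Centroid (centre of gravity) of a fuzzy membership function mu(x):
    u = integral(x * mu) / integral(mu)."""
    return np.trapz(x * mu, x) / np.trapz(mu, x)

# e.g. a triangular membership function peaked at 2 on [0, 4]
x = np.linspace(0.0, 4.0, 401)
mu = np.maximum(0.0, 1.0 - np.abs(x - 2.0) / 2.0)
u = centroid_defuzzify(x, mu)   # -> 2.0 by symmetry
```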
Nation-Building Modeling and Resource Allocation Via Dynamic Programming
2014-09-01
Figure 2. RAND Study Models [59:98,115] (WMA) and used both the k-Nearest Neighbor (KNN) and Nearest Centroid (NC) algorithms to classify future features...The study found that KNN performed better than NC with 85% or greater accuracy in all test cases. The methodology was adopted for use under the...analysis feature of the model. 3.7.1 The No Surge Alternative. On the 10th of January 2007, President George W. Bush delivered a speech to the American
An improved algorithm of laser spot center detection in strong noise background
NASA Astrophysics Data System (ADS)
Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong
2018-01-01
Laser spot center detection is demanded in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering was first used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser spot image was carried out to extract the target from the background. Then morphological filtering was performed to eliminate noise points inside and outside the spot. Finally, the edge of the preprocessed spot image was extracted and the laser spot center was obtained using a circle-fitting method. On the foundation of the circle-fitting algorithm, the improved algorithm adds median filtering, morphological filtering, and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves detection accuracy.
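For illustration, a minimal scipy sketch of the described pipeline: median filtering, binarization, morphological filtering, edge extraction, and a least-squares circle fit (here the algebraic Kåsa fit, one common choice). The threshold and structuring element are assumptions.

```python
import numpy as np
from scipy import ndimage

def spot_center(img, thresh=None):
    """Median filter, binarise, morphologically open, then fit a circle to
    the spot edge by the algebraic (Kasa) least-squares method."""
    f = ndimage.median_filter(img.astype(float), size=5)
    t = thresh if thresh is not None else 0.5 * f.max()
    mask = ndimage.binary_opening(f > t, structure=np.ones((3, 3)))
    edge = mask ^ ndimage.binary_erosion(mask)       # one-pixel edge ring
    y, x = np.nonzero(edge)
    # Kasa fit: x^2 + y^2 = a*x + b*y + c, with centre (a/2, b/2)
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return a / 2.0, b / 2.0
```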
Hidden Semi-Markov Models and Their Application
NASA Astrophysics Data System (ADS)
Beyreuther, M.; Wassermann, J.
2008-12-01
In the framework of detection and classification of seismic signals there are several different approaches. Our choice for a more robust detection and classification algorithm is to adopt Hidden Markov Models (HMM), a technique showing major success in speech recognition. HMM provide a powerful tool to describe highly variable time series based on a doubly stochastic model and therefore allow for a broader class description than, e.g., template-based pattern matching techniques. Being a fully probabilistic model, HMM directly provide a confidence measure of an estimated classification. Furthermore, and in contrast to classic artificial neural networks or support vector machines, HMM incorporate the time dependence explicitly in the models, thus providing an adequate representation of the seismic signal. As with the majority of detection algorithms, HMM are based not on the time- and amplitude-dependent seismogram itself but on features estimated from the seismogram which characterize the different classes. Features, or in other words characteristic functions, are e.g. the sonogram bands, instantaneous frequency, instantaneous bandwidth, or centroid time. In this study we apply continuous Hidden Semi-Markov Models (HSMM), an extension of continuous HMM. The duration probability of an HMM is an exponentially decaying function of time, which is not a realistic representation of the duration of an earthquake. In contrast, HSMM use Gaussians as duration probabilities, which results in a more adequate model. The HSMM detection and classification system is running online as an EARTHWORM module at the Bavarian Earthquake Service. Here the signals that are to be classified differ simply in epicentral distance. This makes it possible to easily decide whether a classification is correct or wrong and thus allows better evaluation of the advantages and disadvantages of the proposed algorithm. The evaluation is based on several months of continuous data, and the results are additionally compared to the previously published discrete HMM, continuous HMM, and a classic STA/LTA. The intermediate evaluation results are very promising.
A novel harmony search-K means hybrid algorithm for clustering gene expression data
Nazeer, KA Abdul; Sebastian, MP; Kumar, SD Madhu
2013-01-01
Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. The DNA microarray technology makes it possible to simultaneously analyze large numbers of genes across different samples. Clustering of microarray data can reveal the hidden gene expression patterns from large quantities of expression data, which in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications. But the original k-means algorithm has several drawbacks: it is computationally expensive and generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find near-global optimal solutions by searching the entire solution space. Low clustering accuracy of the existing algorithms limits their use in many crucial applications of the life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms. PMID:23390351
Transverse oscillations in plasma wakefield experiments at FACET
NASA Astrophysics Data System (ADS)
Adli, E.; Lindstrøm, C. A.; Allen, J.; Clarke, C. I.; Frederico, J.; Gessner, S. J.; Green, S. Z.; Hogan, M. J.; Litos, M. D.; White, G. R.; Yakimenko, V.; An, W.; Clayton, C. E.; Marsh, K. A.; Mori, W. B.; Joshi, C.; Vafaei-Najafabadi, N.; Corde, S.; Lu, W.
2016-09-01
We study transverse effects in a plasma wakefield accelerator. Experimental data from FACET with asymmetry in the beam-plasma system is presented. Energy dependent centroid oscillations are observed on the accelerated part of the charge. The experimental results are compared to PIC simulations and theoretical estimates.
NASA Astrophysics Data System (ADS)
Yu, Miao; Li, Yan; Shu, Tong; Zhang, Yifan; Hong, Xiaobin; Qiu, Jifang; Zuo, Yong; Guo, Hongxiang; Li, Wei; Wu, Jian
2018-02-01
A method of recognizing 16QAM signals based on the k-means clustering algorithm is proposed to mitigate the impact of a transmitter's finite extinction ratio (ER). Pilot symbols with 0.39% overhead are assigned as the initial centroids of the k-means clustering algorithm. Simulation results in a 10 GBaud 16QAM system show that the proposed method obtains higher identification precision than the traditional decision method under finite ER and IQ mismatch. Specifically, the proposed method improves the required OSNR at the FEC limit by 5.5 dB, 4.5 dB, 4 dB, and 3 dB with ER = 12 dB, 16 dB, 20 dB, and 24 dB, respectively, and the acceptable bias error and IQ mismatch ranges are widened by 767% and 360% with ER = 16 dB, respectively.
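For illustration, a minimal scikit-learn sketch of the described scheme under stated assumptions: centroids averaged from the pilot symbols seed k-means, and a single pass refines them on the received constellation. Names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_16qam(rx, pilot_centroids):
    """rx: complex received symbols; pilot_centroids: 16 complex centroids
    averaged from the low-overhead pilot symbols. Returns symbol labels
    and the refined constellation centroids."""
    X = np.column_stack([rx.real, rx.imag])
    init = np.column_stack([pilot_centroids.real, pilot_centroids.imag])
    km = KMeans(n_clusters=16, init=init, n_init=1).fit(X)
    return km.labels_, km.cluster_centers_
```

Seeding with pilot-derived centroids lets the decision regions follow the ER- and IQ-distorted constellation instead of the ideal square grid.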
Toward faster and more accurate star sensors using recursive centroiding and star identification
NASA Astrophysics Data System (ADS)
Samaan, Malak Anees
The objective of this research is to study different novel techniques for spacecraft attitude determination using star tracker sensors. This dissertation addresses various issues in developing improved star tracker software, presents new approaches for better performance of star trackers, and considers applications to realize high-precision attitude estimates. Star sensors are often included in a spacecraft attitude-system instrument suite where high-accuracy pointing capability is required. Novel methods for image processing, ground calibration of camera parameters, autonomous star pattern recognition, and recursive star identification are researched and implemented to achieve a high-accuracy, high-frame-rate star tracker that can be used for many space missions. This dissertation presents the methods and algorithms implemented for the one field-of-view (FOV) StarNav I sensor that was tested aboard the STS-107 mission in spring 2003 and the two-FOV StarNav II sensor for the EO-3 spacecraft scheduled for launch in 2007. The results of this research enable advances in spacecraft attitude determination based upon real-time star sensing and pattern recognition. Building upon recent developments in image processing, pattern recognition algorithms, focal plane detectors, electro-optics, and microprocessors, the star tracker concept utilized in this research has the following key objectives for spacecraft of the future: lower cost, lower mass and smaller volume, increased robustness to environment-induced aging and instrument response variations, and increased adaptability and autonomy via recursive self-calibration and health monitoring on orbit. Many of these attributes are consequences of improved algorithms that are derived in this dissertation.
Fine Guidance Sensing for Coronagraphic Observatories
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Alexander, James W.; Trauger, John T.; Moody, Dwight C.
2011-01-01
Three options have been developed for Fine Guidance Sensing (FGS) for coronagraphic observatories using a Fine Guidance Camera within a coronagraphic instrument. Coronagraphic observatories require very fine precision pointing in order to image faint objects at very small distances from a target star. The Fine Guidance Camera measures the direction to the target star. The first option, referred to as Spot, was to collect all of the light reflected from a coronagraph occulter onto a focal plane, producing an Airy-type point spread function (PSF). This would allow almost all of the starlight from the central star to be used for centroiding. The second approach, referred to as Punctured Disk, collects the light that bypasses a central obscuration, producing a PSF with a punctured central disk. The final approach, referred to as Lyot, collects light after passing through the occulter at the Lyot stop. The study includes generation of representative images for each option by the science team, followed by an engineering evaluation of a centroiding or a photometric algorithm for each option. After the alignment of the coronagraph to the fine guidance system, a "nulling" point on the FGS focal plane is determined by calibration. This alignment is implemented by a fine alignment mechanism that is part of the fine guidance camera selection mirror. If the star images meet the modeling assumptions, and the star "centroid" can be driven to that nulling point, the contrast for the coronagraph will be maximized.
Nano-JASMINE: cosmic radiation degradation of CCD performance and centroid detection
NASA Astrophysics Data System (ADS)
Kobayashi, Yukiyasu; Shimura, Yuki; Niwa, Yoshito; Yano, Taihei; Gouda, Naoteru; Yamada, Yoshiyuki
2012-09-01
Nano-JASMINE (NJ) is a very small astrometry satellite project led by the National Astronomical Observatory of Japan. The satellite is ready for launch, and the launch is currently scheduled for late 2013 or early 2014. The satellite is equipped with a fully depleted CCD and is expected to perform astrometry observations of stars brighter than 9 mag in the zw-band (0.6 µm-1.0 µm). Distances of stars located within 100 pc of the Sun can be determined using annual parallax measurements. The targeted accuracy for the position determination of stars brighter than 7.5 mag is 3 mas, which is equivalent to measuring stellar positions to better than one five-hundredth of the CCD pixel size. The position measurements of stars are performed by centroiding the stellar images taken by the CCD, which operates in time and delay integration mode. The degradation of charge transfer performance due to cosmic radiation damage in orbit has been demonstrated experimentally, so a method is required to compensate for the effects of this performance degradation. One of the most effective ways of achieving this is to simulate observed stellar outputs, including the effect of CCD degradation, and then to formulate our centroiding algorithm and evaluate the accuracy of the measurements. We report here the planned procedure to simulate the outputs of the NJ observations. We also developed a CCD performance-measuring system and present preliminary results obtained with it.
Chen, Ellison; Amano, Keiko; Pedoia, Valentina; Souza, Richard B; Ma, C Benjamin; Li, Xiaojuan
2018-04-18
Patients who have suffered ACL injury are more likely to develop early onset post-traumatic osteoarthritis despite reconstruction. The purpose of our study was to evaluate the longitudinal changes in tibiofemoral cartilage contact area size and location after ACL injury and reconstruction. Thirty-one patients with isolated unilateral ACL injury were followed with T2-weighted Fast Spin Echo (FSE), T1ρ, and T2 MRI at baseline prior to reconstruction, and 6 months, 1 year, and 2 years after surgery. Areas were delineated in FSE images with an in-house Matlab program using a spline-based semi-automated segmentation algorithm. Tibiofemoral contact area and centroid position along the anterior-posterior axis were calculated along with T1ρ and T2 relaxation times on both the injured and non-injured knees. At baseline, the injured knees had significantly smaller and more posteriorly positioned contact areas on the medial tibial surface compared to corresponding healthy knees. These differences persisted 6 months after reconstruction. Moreover, subjects with more anterior medial centroid positions at 6 months had elevated T1ρ and T2 measures in the posterior medial tibial plateau at 1 year. Changes in contact area and centroid position after ACL injury and reconstruction may characterize some of the mechanical factors contributing to post-traumatic osteoarthritis. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.
Optic disc segmentation: level set methods and blood vessels inpainting
NASA Astrophysics Data System (ADS)
Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-03-01
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.
Fast Automatic Segmentation of White Matter Streamlines Based on a Multi-Subject Bundle Atlas.
Labra, Nicole; Guevara, Pamela; Duclap, Delphine; Houenou, Josselin; Poupon, Cyril; Mangin, Jean-François; Figueroa, Miguel
2017-01-01
This paper presents an algorithm for fast segmentation of white matter bundles from massive dMRI tractography datasets using a multisubject atlas. We use a distance metric to compare streamlines in a subject dataset to labeled centroids in the atlas, and label them using a per-bundle configurable threshold. In order to reduce segmentation time, the algorithm first preprocesses the data using a simplified distance metric to rapidly discard candidate streamlines in multiple stages, while guaranteeing that no false negatives are produced. The smaller set of remaining streamlines is then segmented using the original metric, thus eliminating any false positives from the preprocessing stage. As a result, a single-thread implementation of the algorithm can segment a dataset of almost 9 million streamlines in less than 6 minutes. Moreover, parallel versions of our algorithm for multicore processors and graphics processing units further reduce the segmentation time to less than 22 seconds and to 5 seconds, respectively. This performance enables the use of the algorithm in truly interactive applications for visualization, analysis, and segmentation of large white matter tractography datasets.
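The preprocessing idea (discard with a cheap bound, confirm with the exact metric) can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes streamlines resampled to a common number of 3-D points and a maximum point-wise Euclidean metric, for which any single-point distance is a valid lower bound; orientation handling (flipped streamlines) is omitted.

```python
import numpy as np

def max_pointwise_distance(a, b):
    """Maximum Euclidean distance between corresponding points of two
    streamlines resampled to the same number of 3-D points (shape N x 3)."""
    return np.sqrt(((a - b) ** 2).sum(axis=1)).max()

def segment_bundle(streamlines, centroid, threshold):
    """Two-stage filter: a cheap single-point lower bound discards most
    candidates early; survivors are re-checked with the full metric."""
    mid = len(centroid) // 2
    accepted = []
    for s in streamlines:
        # Stage 1: the midpoint distance is a lower bound on the maximum
        # point-wise distance, so rejections here are never false negatives.
        if np.linalg.norm(s[mid] - centroid[mid]) > threshold:
            continue
        # Stage 2: the exact metric removes stage-1 false positives.
        if max_pointwise_distance(s, centroid) <= threshold:
            accepted.append(s)
    return accepted
```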
Guo, Yang-Yang; He, Dong-Jian; Liu, Cong
2018-06-25
Insect behaviour is an important research topic in plant protection. To study insect behaviour accurately, it is necessary to observe and record flight trajectories quantitatively and precisely in three dimensions (3D). The goal of this research was to analyse frames extracted from videos using Kernelized Correlation Filters (KCF) and Background Subtraction (BS) (KCF-BS) to plot the 3D trajectory of the cabbage butterfly (P. rapae). Considering the experimental environment with a wind tunnel, a quadrature binocular vision insect video capture system was designed and applied in this study. The KCF-BS algorithm was used to track the butterfly in video frames and obtain coordinates of the target centroid in the two videos. Finally, the 3D trajectory was calculated according to the matching relationship between corresponding frames of the two viewing angles. To verify the validity of the KCF-BS algorithm, the Compressive Tracking (CT) and Spatio-Temporal Context Learning (STC) algorithms were run for comparison. The results revealed that the KCF-BS tracking algorithm performed more favourably than CT and STC in terms of accuracy and robustness.
Characterization of the ASPIICS/OPSE metrology sub-system and PSF centroiding procedure
NASA Astrophysics Data System (ADS)
Loreggia, D.; Fineschi, S.; Capobianco, G.; Bemporad, A.; Focardi, M.; Landini, F.; Massone, G.; Casti, M.; Nicolini, G.; Pancrazi, M.; Romoli, M.; Noce, V.; Baccani, C.; Cernica, I.; Purica, M.; Nisulescu, M.; Thizy, C.; Servaye, J. S.; Renotte, E.
2016-07-01
Space-based implementations of astronomical observation techniques such as coronagraphy and interferometry have raised increasing interest in recent years, since these techniques benefit greatly when moved into space, and the employment of diluted systems represents a milestone for astronomical research. In this work, we present the Optical Position Sensors Emitter (OPSE) metrological sub-system on board PROBA-3. PROBA-3 is an ESA technology mission that will test in orbit many metrology techniques for the maintenance of formation flying with two satellites, in this case an occulter and a main satellite housing a coronagraph named ASPIICS, kept at an average inter-satellite distance of 144 m. The scientific task is the observation of the solar corona at high spatial and temporal resolution down to 1.08 R⊙. The OPSE will monitor the relative position of the two satellites and consists of 3 emitters positioned on the rear surface of the occulter, which will be observed by the coronagraph itself. A Centre of Gravity (CoG) algorithm is used to monitor the emitters' PSFs at the focal plane of the coronagraph, retrieving the occulter position with respect to the main spacecraft. The 3σ location target accuracy is 300 µm for lateral movements and 21 cm for longitudinal movements. A description of the characterization tests on the OPSE LED sources and of the design of a laboratory set-up for on-ground testing is given, with a preliminary assessment of the performance expected from the OPSE image-centroiding algorithm.
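The abstract names a Centre of Gravity (CoG) algorithm for locating the emitter PSFs; the textbook form of that estimator is sketched below. The background-subtraction step is an assumption on our part, since the flight algorithm's windowing and thresholding details are not given here.

```python
import numpy as np

def center_of_gravity(window, background=0.0):
    """Intensity-weighted centroid of a PSF sub-window.
    Returns (x, y) in pixel coordinates of the window."""
    w = np.clip(window.astype(float) - background, 0.0, None)  # suppress background bias
    total = w.sum()
    if total <= 0:
        raise ValueError("no signal in window")
    y, x = np.indices(w.shape)
    return (x * w).sum() / total, (y * w).sum() / total
```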
Shifts in the ecological niche of Lutzomyia peruensis under climate change scenarios in Peru.
Moo-Llanes, D A; Arque-Chunga, W; Carmona-Castro, O; Yañez-Arenas, C; Yañez-Trujillano, H H; Cheverría-Pacheco, L; Baak-Baak, C M; Cáceres, A G
2017-06-01
The Peruvian Andes presents a climate suitable for many species of sandfly that are known vectors of leishmaniasis or bartonellosis, including Lutzomyia peruensis (Diptera: Psychodidae), among others. In the present study, occurrence data for Lu. peruensis were compiled from the scientific literature on Peru published between 1927 and 2015. Based on these data, ecological niche models were constructed to predict spatial distributions using three algorithms [support vector machine (SVM), the Genetic Algorithm for Rule-set Prediction (GARP), and Maximum Entropy (MaxEnt)]. In addition, the environmental requirements of Lu. peruensis and three niche characteristics were modelled in the context of future climate change scenarios: (a) potential changes in niche breadth; (b) shifts in the direction and magnitude of niche centroids; and (c) shifts in elevation range. The model identified areas with environments suitable for Lu. peruensis in most regions of Peru (45.77%), at an average altitude of 3289 m a.s.l. Under climate change scenarios, a decrease in the distribution areas of Lu. peruensis was observed for all representative concentration pathways; the centroid of the species' ecological niche shifted toward the northwest in all climate change scenarios. The information generated in this study may help the health authorities responsible for supervising leishmaniasis control strategies to coordinate, plan and implement appropriate strategies for each area of risk, taking into account the geographic distribution and potential dispersal of Lu. peruensis. © 2017 The Royal Entomological Society.
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity when designing multiobjective optimization algorithms has therefore become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space described by a probability distribution whose centroid is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
Enhanced K-means clustering with encryption on cloud
NASA Astrophysics Data System (ADS)
Singh, Iqjot; Dwivedi, Prerna; Gupta, Taru; Shynu, P. G.
2017-11-01
This paper addresses the problem of storing and managing big files in the cloud by implementing hashing on Hadoop for big data, and ensures security while uploading and downloading files. Cloud computing is a paradigm that emphasizes sharing data and facilitates sharing infrastructure and resources.[10] Hadoop is open-source software that provides the means to store and manage big files in the cloud according to our needs. The K-means clustering algorithm partitions data points into clusters according to the distances between each point and the cluster centroids. Hashing stores and retrieves data using hash keys; the hashing algorithm, called a hash function, maps the original data to a key under which the stored data is later fetched.[17] Encryption is a process that transforms electronic data into an unreadable form known as ciphertext. Decryption is the opposite process: it transforms the ciphertext back into plaintext that the end user can read and understand. For encryption and decryption, a symmetric key cryptographic algorithm is used; specifically, the DES algorithm provides secure storage of the files.[3]
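For reference, the clustering step the paper builds on can be stated more precisely than in the abstract: k-means alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points. Below is a minimal sketch of that loop only; the Hadoop, hashing, and DES layers of the paper are out of scope here.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd-style k-means: assign each point to its nearest
    centroid, then move each centroid to the mean of its members."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # distances: one row per point, one column per centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):   # converged
            break
        centroids = new
    return centroids, labels
```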
GPU-based simulation of optical propagation through turbulence for active and passive imaging
NASA Astrophysics Data System (ADS)
Monnier, Goulven; Duval, François-Régis; Amram, Solène
2014-10-01
IMOTEP is a GPU-based (Graphical Processing Units) software package relying on a fast parallel implementation of Fresnel diffraction through successive phase screens. Its applications include active imaging, laser telemetry and passive imaging through turbulence with anisoplanatic spatial and temporal fluctuations. Thanks to the parallel implementation on GPU, speedups ranging from 40X to 70X are achieved. The present paper gives a brief overview of IMOTEP models, algorithms, implementation and user interface. It then focuses on major improvements recently brought to the anisoplanatic imaging simulation method. Previously, we took advantage of the computational power offered by the GPU to develop a simulation method based on large series of deterministic realisations of the PSF distorted by turbulence. The phase-screen propagation algorithm, by reproducing higher moments of the incident wavefront distortion, provides realistic PSFs. However, we first used a coarse Gaussian model to fit the numerical PSFs and characterise their spatial statistics through only 3 parameters (two-dimensional displacement of the centroid and width). This approach was unable to reproduce effects related to the details of the PSF structure, especially the "speckles" that lead to prominent high-frequency content in short-exposure images. To overcome this limitation, we recently implemented a new empirical model of the PSF, based on Principal Components Analysis (PCA), intended to capture most of the PSF complexity. The GPU implementation allows estimating and handling efficiently the numerous (up to several hundred) principal components typically required under the strong turbulence regime. A first, demanding computational step involves PCA, phase-screen propagation and covariance estimation. In a second step, realistic instantaneous images, fully accounting for anisoplanatic effects, are quickly generated. Preliminary results are presented.
Aeroelastically coupled blades for vertical axis wind turbines
Paquette, Joshua; Barone, Matthew F.
2016-02-23
Various technologies described herein pertain to a vertical axis wind turbine blade configured to rotate about a rotation axis. The vertical axis wind turbine blade includes at least an attachment segment, a rear swept segment, and optionally, a forward swept segment. The attachment segment is contiguous with the forward swept segment, and the forward swept segment is contiguous with the rear swept segment. The attachment segment includes a first portion of a centroid axis, the forward swept segment includes a second portion of the centroid axis, and the rear swept segment includes a third portion of the centroid axis. The second portion of the centroid axis is angularly displaced ahead of the first portion of the centroid axis and the third portion of the centroid axis is angularly displaced behind the first portion of the centroid axis in the direction of rotation about the rotation axis.
Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael
2018-01-01
This paper presents a high performance vision-based system with a single static camera for traffic surveillance, performing moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. Because occlusion is present, an algorithm was implemented to reduce its effects. The tracking is performed with an adaptive Kalman filter. Finally, the selected geometric features (estimated area, height, and width) are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results on eight real traffic videos with more than 4000 ground-truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate (recall), precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
Ring Image Analyzer
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry V.
2012-01-01
Ring Image Analyzer software analyzes images to recognize elliptical patterns. It determines the ellipse parameters (axis ratio, centroid coordinates, tilt angle). The program attempts to recognize elliptical fringes (e.g., Newton rings) on a photograph and determine their centroid position, the short-to-long-axis ratio, and the angle of rotation of the long axis relative to the horizontal direction on the photograph. These capabilities are important in interferometric imaging and control of surfaces. In particular, this program has been developed and applied for determining the rim shape of precision-machined optical whispering gallery mode resonators. The program relies on a unique image recognition algorithm aimed at recognizing elliptical shapes, but can be easily adapted to other geometric shapes. It is robust against non-elliptical details of the image and against noise. Interferometric analysis of precision-machined surfaces remains an important technological instrument in hardware development and quality analysis. This software automates and increases the accuracy of this technique. The software has been developed for the needs of an R&TD-funded project and has become an important asset for future research proposals to NASA as well as other agencies.
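The NASA program's own algorithm is not reproduced here, but the core task it describes, recovering centroid, axis ratio, and tilt from an elliptical fringe, can be approximated with a least-squares ellipse fit, for example via OpenCV. A hedged sketch; the file name and the Otsu thresholding choice are illustrative assumptions.

```python
import cv2

# Hypothetical input: a grayscale interferogram containing one dominant fringe.
img = cv2.imread("rings.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
ring = max(contours, key=cv2.contourArea)        # keep the largest contour
(cx, cy), (w, h), angle = cv2.fitEllipse(ring)   # least-squares ellipse fit
ratio = min(w, h) / max(w, h)                    # short-to-long-axis ratio
print(f"centroid=({cx:.1f}, {cy:.1f}), axis ratio={ratio:.3f}, tilt={angle:.1f} deg")
```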
Photon-counting intensified random-access charge injection device
NASA Astrophysics Data System (ADS)
Norton, Timothy J.; Morrissey, Patrick F.; Haas, Patrick; Payne, Leslie J.; Carbone, Joseph; Kimble, Randy A.
1999-11-01
At NASA GSFC we are developing a high resolution solar-blind photon-counting detector system for UV space-based astronomy. The detector comprises a high gain MCP intensifier fiber-optically coupled to a charge injection device (CID). The detector system utilizes an FPGA-based centroiding system to locate the centers of photon events from the intensifier to high accuracy. The photon event addresses are passed via a PCI interface, with a GPS-derived time stamp inserted per frame, to an integrating memory. Here we present imaging performance data which show resolution of the MCP tube pore structure at an MCP pore diameter of 8 µm. These data validate the ICID concept for intensified photon-counting readout. We also discuss correction techniques used to remove the fixed pattern noise effects inherent in the centroiding algorithms used, and present data showing the local dynamic range of the device. Progress towards development of a true random-access CID (RACID 810) is also discussed, and astronomical data taken with the ICID detector system demonstrating the photon event time-tagging mode of the system are also presented.
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.
2017-12-01
Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
Fournier, Céline; Bridal, S Lori; Coron, Alain; Laugier, Pascal
2003-04-01
In vivo skin attenuation estimators must be applicable to backscattered radio frequency signals obtained in a pulse-echo configuration. This work compares three such estimators: short-time Fourier multinarrowband (MNB), short-time Fourier centroid shift (FC), and autoregressive centroid shift (ARC). All provide estimates of the attenuation slope (β, dB·cm⁻¹·MHz⁻¹); MNB also provides an independent estimate of the mean attenuation level (IA, dB·cm⁻¹). Practical approaches are proposed for data windowing, spectral variance characterization, and bandwidth selection. Then, based on simulated data, FC and ARC were selected as the best (compromise between bias and variance) attenuation slope estimators. The FC, ARC, and MNB were applied to in vivo human skin data acquired at 20 MHz to estimate β(FC), β(ARC), and IA(MNB), respectively (without diffraction correction, between 11 and 27 MHz). Lateral heterogeneity had less effect and day-to-day reproducibility was smaller for IA than for β. The IA and β(ARC) were dependent on the pressure applied to the skin during acquisition, and IA on room and skin-surface temperatures. Negative values of IA imply that IA and β may be influenced not only by skin's attenuation but also by structural heterogeneity across dermal depth. Even so, IA was correlated with subject age, and IA, β(FC), and β(ARC) were dependent on subject gender. Thus, in vivo attenuation measurements reveal interesting variations with subject age and gender and appear promising for detecting skin structure modifications.
A New Soft Computing Method for K-Harmonic Means Clustering.
Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe
2016-01-01
The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
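The quantity KHM minimizes can be written down directly, even though the iSSO and VNS search components are beyond an abstract-level sketch. A minimal version of the objective is shown below, with the distance exponent p treated as an assumed parameter.

```python
import numpy as np

def khm_objective(points, centroids, p=2.0, eps=1e-12):
    """K-harmonic means objective: for each point, k times the harmonic
    average of its distances to all k centroids, summed over points."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    d = np.maximum(d, eps)               # guard against a zero distance
    k = centroids.shape[0]
    return (k / (1.0 / d**p).sum(axis=1)).sum()
```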
Modified centroid for estimating sand, silt, and clay from soil texture class
USDA-ARS?s Scientific Manuscript database
Models that require inputs of soil particle size commonly use soil texture class for input; however, texture classes do not represent the continuum of soil size fractions. Soil texture class and clay percentage are collected as a standard practice for many land management agencies (e.g., NRCS, BLM, ...
Radar attenuation tomography using the centroid frequency downshift method
Liu, L.; Lane, J.W.; Quan, Y.
1998-01-01
A method for tomographically estimating electromagnetic (EM) wave attenuation based on analysis of centroid frequency downshift (CFDS) of impulse radar signals is described and applied to cross-hole radar data. The method is based on a constant-Q model, which assumes a linear frequency dependence of attenuation for EM wave propagation above the transition frequency. The method uses the CFDS to construct the projection function. In comparison with other methods for estimating attenuation, the CFDS method is relatively insensitive to the effects of geometric spreading, instrument response, and antenna coupling and radiation pattern, but requires the data to be broadband so that the frequency shift and variance can be easily measured. The method is well-suited for difference tomography experiments using electrically conductive tracers. The CFDS method was tested using cross-hole radar data collected at the U.S. Geological Survey Fractured Rock Research Site at Mirror Lake, New Hampshire (NH) during a saline-tracer injection experiment. The attenuation-difference tomogram created with the CFDS method outlines the spatial distribution of saline tracer within the tomography plane. © 1998 Elsevier Science B.V. All rights reserved.
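The CFDS observable reduces to spectral centroids: the centroid frequency of a trace is the power-weighted mean frequency of its spectrum, and its downshift relative to the source spectrum drives the projection function. A minimal sketch, assuming a uniformly sampled trace:

```python
import numpy as np

def centroid_frequency(trace, dt):
    """Spectral centroid f_c = sum(f * S(f)) / sum(S(f)) of a trace,
    where S is the power spectrum estimated from the FFT."""
    spec = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), dt)
    return (freqs * spec).sum() / spec.sum()

# The CFDS observable between a source pulse and a received trace:
# downshift = centroid_frequency(src, dt) - centroid_frequency(rx, dt)
```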
Measuring zebrafish turning rate.
Mwaffo, Violet; Butail, Sachit; di Bernardo, Mario; Porfiri, Maurizio
2015-06-01
Zebrafish is becoming a popular animal model in preclinical research, and zebrafish turning rate has been proposed for the analysis of activity in several domains. The turning rate is often estimated from the trajectory of the fish centroid that is output by commercial or custom-made target tracking software run on overhead videos of fish swimming. However, the accuracy of such indirect methods with respect to the turning rate associated with changes in heading during zebrafish locomotion is largely untested. Here, we compare two indirect methods for turning rate estimation, based on centroid velocity or position data, with full shape tracking at three different video sampling rates. We use tracking data from overhead video recorded at 60, 30, and 15 frames per second of zebrafish swimming in a shallow water tank. Statistical comparisons of absolute turning rate across methods and sampling rates indicate that, while the indirect methods are indistinguishable from full shape tracking, the video sampling rate significantly influences the turning rate measurement. The results of this study can aid in the selection of the video capture frame rate, an experimental design parameter in zebrafish behavioral experiments where activity is an important measure.
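The velocity-based indirect estimate compared here is straightforward to sketch: heading is taken as the direction of the centroid velocity, and the turning rate is its unwrapped frame-to-frame change. A minimal version, assuming uniformly sampled centroid tracks; the full shape tracking baseline is obviously not reproduced here.

```python
import numpy as np

def turning_rate(x, y, fps):
    """Turning rate (rad/s) from centroid positions: heading is the angle
    of the centroid velocity; its unwrapped frame-to-frame difference,
    scaled by the frame rate, gives the turning rate."""
    heading = np.arctan2(np.diff(y), np.diff(x))
    return np.diff(np.unwrap(heading)) * fps
```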
2012-03-01
(WMA) and used both the k-Nearest Neighbor (KNN) and Nearest Centroid (NC) algorithms to classify future features. [Figure 3: RAND study models, (a) Coalition and Regional, (b) Indigenous [32:98,115]] The study found that KNN performed better than NC, with 85% or greater accuracy in all test cases. 4.2.1 No Surge. On the 10th of January 2007, President George W. Bush delivered a speech to the American public outlining a new strategy in
Shrinkage simplex-centroid designs for a quadratic mixture model
NASA Astrophysics Data System (ADS)
Hasan, Taha; Ali, Sajid; Ahmed, Munir
2018-03-01
A simplex-centroid design for q mixture components comprises all possible subsets of the q components, each present in equal proportions. The design contains no full mixture blends except the overall centroid. In real-life situations, however, all mixture blends comprise at least a minimum proportion of each component. Here, we introduce simplex-centroid designs that contain complete blends, at the cost of some loss in D-efficiency and stability in G-efficiency. We call such designs shrinkage simplex-centroid designs. Furthermore, we use the proposed designs to generate component-amount designs by projection.
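The classical design described in the first sentence is easy to enumerate, which makes the "no full blends except the overall centroid" point concrete; the shrinkage modification proposed in the paper is not reproduced here. A minimal sketch:

```python
from itertools import combinations

def simplex_centroid_design(q):
    """All 2**q - 1 runs of a simplex-centroid design: every nonempty
    subset of the q components blended in equal proportions."""
    runs = []
    for r in range(1, q + 1):
        for subset in combinations(range(q), r):
            runs.append([1.0 / r if i in subset else 0.0 for i in range(q)])
    return runs

# q = 3 gives the 7 classical runs: three pure components, three binary
# 50/50 blends, and the overall centroid (1/3, 1/3, 1/3), which is the
# only run containing all q components.
```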
A genetic graph-based approach for partitional clustering.
Menéndez, Héctor D; Barrero, David F; Camacho, David
2014-05-01
Clustering is one of the most versatile tools for data analysis. In recent years, clustering that seeks the continuity of data (in opposition to classical centroid-based approaches) has attracted increasing research interest. It is a challenging problem with remarkable practical interest. The most popular continuity clustering method is the spectral clustering (SC) algorithm, which is based on graph cuts: it initially generates a similarity graph using a distance measure and then studies its graph spectrum to find the best cut. This approach is sensitive to the parameters of the metric, and a correct parameter choice is critical to the quality of the clusters. This work proposes a new algorithm, inspired by SC, that reduces the parameter dependency while maintaining the quality of the solution. The new algorithm, named genetic graph-based clustering (GGC), takes an evolutionary approach, introducing a genetic algorithm (GA) to cluster the similarity graph. The experimental validation shows that GGC increases the robustness of SC and has competitive performance in comparison with classical clustering methods, at least on the synthetic and real datasets used in the experiments.
Reducing the Volume of NASA Earth-Science Data
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre
2010-01-01
A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Song, Yinglei; Ma, Limin; Zhou, Min
2003-05-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watershed to extract cell boundaries from grey-level images. It generates a sequence of Euclidean distances by selecting pixels in clockwise order along the boundary of the cell and calculating the Euclidean distance of each selected pixel from the centroid of the cell. A feature vector associated with each cell is then obtained by applying the auto-regressive moving-average (ARMA) model to the generated sequence of Euclidean distances. The clustering measure J3 = trace(Sw⁻¹·Sm), involving the within-class (Sw) and mixture (Sm) scatter matrices, is computed for both cell classes to provide insight into the extent to which the different cell classes in the training data are separated. Our test results suggest that the algorithm is highly accurate for the development of an interactive, computer-assisted diagnosis (CAD) tool.
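The separability measure quoted here, J3 = trace(Sw⁻¹·Sm), is a standard scatter-matrix criterion: larger values indicate better-separated classes. A minimal sketch, assuming labeled feature vectors, equal sample weighting, and an invertible within-class scatter matrix:

```python
import numpy as np

def j3_measure(features, labels):
    """Class separability J3 = trace(inv(Sw) @ Sm): Sw is the pooled
    within-class scatter, Sm the mixture (total) scatter, both
    normalized by the total sample count."""
    n, dim = features.shape
    sw = np.zeros((dim, dim))
    for c in np.unique(labels):
        xc = features[labels == c]
        d = xc - xc.mean(axis=0)
        sw += d.T @ d / n
    d = features - features.mean(axis=0)
    sm = d.T @ d / n
    return np.trace(np.linalg.inv(sw) @ sm)
```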
James F. Fowler; Carolyn Hull Sieg; Shaula Hedwall
2015-01-01
Population size and density estimates have traditionally been acceptable ways to track species' response to changing environments; however, species' population centroid elevation has recently become an equally important metric. Packera franciscana (Greene) W.A. Weber and A. Love (Asteraceae; San Francisco Peaks ragwort) is a single-mountain endemic plant found only...
Improved measurements of RNA structure conservation with generalized centroid estimators.
Okada, Yohei; Saito, Yutaka; Sato, Kengo; Sakakibara, Yasubumi
2011-01-01
Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task not only for molecular cell biology but also for bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since the biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone is not an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed, such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measures are not suitable for identifying ncRNAs in some cases, including genome-wide searches, and incur a high false discovery rate. In this study, we propose improved measures based on SCI and BPD, applying generalized centroid estimators to incorporate robustness against low quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with the original SCI on structural alignments generated with RAF, a high quality structural aligner, which requires on average twice the computational time. We conclude that our methods are more suitable than the original SCI and BPD for genome-wide alignments, which are of low quality from the standpoint of secondary structures.
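For intuition, generalized (γ-)centroid estimation reduces, in its simplest reading, to a probability threshold: a base pair is favored when its posterior probability exceeds 1/(γ+1). The sketch below shows only that thresholding step as an illustration; proper decoding maximizes the expected gain with a Nussinov-style dynamic program so that the predicted pairs remain nested, which is omitted here.

```python
import numpy as np

def gamma_centroid_pairs(bpp, gamma=1.0):
    """Base pairs (i, j) whose posterior probability in the pairing
    matrix `bpp` exceeds 1 / (gamma + 1); larger gamma trades
    specificity for sensitivity."""
    i, j = np.where(np.triu(bpp, k=1) > 1.0 / (gamma + 1.0))
    return list(zip(i.tolist(), j.tolist()))
```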
Centroid of a Polygon--Three Views.
ERIC Educational Resources Information Center
Shilgalis, Thomas W.; Benson, Carol T.
2001-01-01
Investigates the idea of the center of mass of a polygon and illustrates centroids of polygons. Connects physics, mathematics, and technology to produce results that generalize the notion of centroid to polygons other than triangles. (KHR)
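One of the "views" generalizes cleanly to code: the area centroid of a simple polygon follows from the shoelace formula, and for anything other than a triangle it generally differs from the plain average of the vertices, which is precisely why the generalization is interesting. A minimal sketch:

```python
def polygon_centroid(pts):
    """Area centroid of a simple polygon (vertices in order):
    Cx = (1/(6A)) * sum (x_i + x_j)(x_i*y_j - x_j*y_i), j = i + 1,
    where A is the signed shoelace area."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

# For a triangle this equals the vertex average; for a square with one
# clipped corner the two notions of "center" already disagree.
```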
Detection of Multi-Layer and Vertically-Extended Clouds Using A-Train Sensors
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A. P.; Bhartia, P. K.; Wind, G.; Platnick, S.; Menzel, W. P.
2010-01-01
The detection of multiple cloud layers using satellite observations is important for retrieval algorithms as well as climate applications. In this paper, we describe a relatively simple algorithm to detect multiple cloud layers and distinguish them from vertically-extended clouds. The algorithm can be applied to coincident passive sensors that derive both cloud-top pressure from thermal infrared observations and an estimate of solar photon pathlength from UV, visible, or near-IR measurements. Here, we use data from the A-train afternoon constellation of satellites: cloud-top pressure, cloud optical thickness, and the multi-layer flag from the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS), and the optical centroid cloud pressure from the Aura Ozone Monitoring Instrument (OMI). For the first time, we use data from the CloudSat radar to evaluate the results of a multi-layer cloud detection scheme. The cloud classification algorithms applied with different passive sensor configurations compare well with each other as well as with data from CloudSat. We compute monthly mean fractions of pixels containing multi-layer and vertically-extended clouds for January and July 2007 at the OMI spatial resolution (12 km × 24 km at nadir) and at the 5 km × 5 km MODIS resolution used for infrared cloud retrievals. There are seasonal variations in the spatial distribution of the different cloud types. The fraction of cloudy pixels containing distinct multi-layer cloud is a strong function of the pixel size. Globally averaged, these fractions are approximately 20% and 10% for OMI and MODIS, respectively. These fractions may be significantly higher or lower depending upon location. There is a much smaller resolution dependence for fractions of pixels containing vertically-extended clouds (approximately 20% for OMI and slightly less for MODIS globally), suggesting larger spatial scales for these clouds. We also find higher fractions of vertically-extended clouds over land as compared with ocean, particularly in the tropics and summer hemisphere.
Voronoi Tessellations and Their Application to Climate and Global Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, Lili; Ringler, Todd; Gunzburger, Max
2011-01-01
We review the use of Voronoi tessellations for grid generation, especially on the whole sphere or in regions on the sphere. Voronoi tessellations and the corresponding Delaunay tessellations in regions and surfaces of Euclidean space are defined, and properties they possess that make them well-suited for grid generation purposes are discussed, as are algorithms for their construction. This is followed by a more detailed look at one very special type of Voronoi tessellation, the centroidal Voronoi tessellation (CVT). After defining them, discussing some of their properties, and presenting algorithms for their construction, we illustrate the use of CVTs for producing both quasi-uniform and variable resolution meshes in the plane and on the sphere. Finally, we briefly discuss the computational solution of model equations based on CVTs on the sphere.
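The most familiar CVT construction algorithm, Lloyd's method, is compact enough to sketch: alternate between assigning sample points of the region to their nearest generator and moving each generator to the centroid of its Voronoi cell. A minimal Monte Carlo version in the plane; the spherical case additionally requires projecting generators back onto the surface, which is omitted here.

```python
import numpy as np

def lloyd_cvt(generators, samples, iters=50):
    """Approximate a centroidal Voronoi tessellation by Lloyd's method,
    using Monte Carlo samples of the region to estimate cell centroids."""
    g = generators.copy()
    for _ in range(iters):
        owner = np.linalg.norm(samples[:, None, :] - g[None, :, :],
                               axis=2).argmin(axis=1)
        for j in range(len(g)):
            cell = samples[owner == j]
            if len(cell):                 # move generator to cell centroid
                g[j] = cell.mean(axis=0)
    return g

# e.g. a quasi-uniform mesh of the unit square:
# g = lloyd_cvt(np.random.rand(32, 2), np.random.rand(20000, 2))
```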
Kaufmann, Anton; Butcher, Patrick
2006-01-01
Liquid chromatography coupled to orthogonal acceleration time-of-flight mass spectrometry (LC/TOF) provides an attractive alternative to liquid chromatography coupled to triple quadrupole mass spectrometry (LC/MS/MS) in the field of multiresidue analysis. The sensitivity and selectivity of LC/TOF approach those of LC/MS/MS. TOF provides accurate mass information and a significantly higher mass resolution than quadrupole analyzers. The mass resolution of commercial TOF instruments, ranging from 10 000 to 18 000 full width at half maximum (FWHM), is not, however, sufficient to completely exclude the problem of isobaric interferences (co-elution of analyte ions with matrix compounds of very similar mass). Because of the required data storage capacity, TOF raw data are commonly centroided before being electronically stored. However, centroiding can lead to a loss of data quality: the co-elution of a low intensity analyte peak with an isobaric, high intensity matrix compound can cause problems, as some centroiding algorithms may not be capable of deconvoluting such partially merged signals, leading to incorrect centroids. Co-elution of isobaric compounds was deliberately simulated by injecting diluted binary mixtures of isobaric model substances at various relative intensities. Depending on the mass differences between the two isobaric compounds and the resolution provided by the TOF instrument, significant deviations in exact mass measurements and signal intensities were observed. The extraction of a reconstructed ion chromatogram based on very narrow mass windows can even result in the complete loss of the analyte signal. Guidelines have been proposed to avoid such problems. The use of sub-2 µm HPLC packing materials is recommended to improve chromatographic resolution and reduce the risk of co-elution. The width of the extraction mass windows for reconstructed ion chromatograms should be defined according to the resolution of the TOF instrument. Alternative approaches include spiking the sample with appropriate analyte concentrations. Furthermore, enhanced software capable of deconvoluting partially merged mass peaks may become available. Copyright (c) 2006 John Wiley & Sons, Ltd.
Baños-Capilla, M C; García, M A; Bea, J; Pla, C; Larrea, L; López, E
2007-06-01
The quality of dosimetry in radiotherapy treatment requires accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, the use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use in obtaining images that delimit the target in radiotherapy treatment. Several phantoms were used to verify image correlation, in combination with fiducial markers used as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms: first, the fusion method used by the PET/CT system by GEMS (hardware fusion), which rests on there being satisfactory coincidence between the reconstruction centers of the CT and PET systems; and second, fiducial fusion, a registration method based on a least-squares fitting algorithm over a system of landmark points. The study concluded with the verification of the centroid positions of some phantom components in both imaging modalities. Centroids were estimated through a calculation similar to center-of-mass, weighted by the CT number in CT and by the uptake intensity in PET. The mean deviations found for the hardware fusion method were |Δx| ± σ = 3.3 mm ± 1.0 mm and |Δx| ± σ = 3.6 mm ± 1.0 mm. These values were substantially improved upon applying fiducial fusion based on external landmark points: |Δx| ± σ = 0.7 mm ± 0.8 mm and |Δx| ± σ = 0.3 mm ± 1.7 mm. We also noted that the differences found for each of the fusion methods were similar for both the axial and helical CT image acquisition protocols.
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background; the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and weighted centroid calculation are simple but exhibit large errors, especially at low SNR; Gaussian fitting achieves high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in the star image, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area and, within that narrowed area, applies a number of interpolations to segment the pixels. It then exploits the symmetry of the stellar energy distribution to locate the centroid: assuming the current pixel is the star centroid, it computes the difference between the sums of energy along symmetric directions (here, transverse and longitudinal) over an equal step length around the current pixel (the step length can be chosen according to conditions; this paper uses 9), and takes as the centroid position in each direction the point where the minimum difference appears. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well at low SNR. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparing these results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better effect.
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, electro-optical (EO), infrared (IR), synthetic aperture radar (SAR), multi-spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive vector quantization is a procedure that designs codebooks as the data is being encoded, or quantized. This is done by computing the centroid as a recursive moving average, where the centroids move after every vector is encoded. For a fixed set of vectors, this recursive calculation yields a centroid identical to the batch computation. This method of centroid calculation can be easily combined with VQ encoding techniques. The quantizer changes definition after every encoded vector by recursively updating the centroid of minimum distance, which is selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
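The recursive moving-average update at the heart of this scheme is essentially one line; the sketch below is the textbook running-mean form, which matches the batch centroid exactly for a fixed vector set. Codebook bookkeeping and the side-information channel to the decoder are omitted.

```python
import numpy as np

def update_centroid(centroid, count, vector):
    """Running-mean update: fold one newly encoded vector into the
    centroid of the codebook cell the encoder selected for it."""
    count += 1
    return centroid + (np.asarray(vector) - centroid) / count, count

# After encoding vector x into cell j:
# codebook[j], counts[j] = update_centroid(codebook[j], counts[j], x)
```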
NASA Astrophysics Data System (ADS)
Karastathis, Vassilios; Papoulia, Joanna; di Fiore, Boris; Makris, Jannis; Tsambas, Anestis; Stampolidis, Alexandros; Papadopoulos, Gerassimos
2010-05-01
Along the coast of the North Evian Gulf, Central Greece, there are significant geothermal sites and thermal springs, such as Aedipsos, Yaltra, Lichades, Ilia, Kamena Vourla, and Thermopylae, but also volcanoes of Quaternary-Pleistocene age, such as Lichades and Vromolimni. Since the deep origin of these local volcanoes and geothermal fields, and their relation to those of the wider region, have not yet been clarified in detail, we attempted a deep structure investigation by conducting a 3D local earthquake tomography study in combination with Curie depth analysis of aeromagnetic data. A seismographic network of 23 portable land stations and 7 OBS was deployed in the area of the North Evian Gulf to record the microseismic activity over a 4-month period. Two thousand events were located, with ML 0.7 to 4.5. To build the 3D seismic velocity structure for the investigation area, we implemented traveltime inversion with the algorithm SIMULPS14 on the 540 best-located events. The code performed simultaneous inversion of the model parameters Vp and Vp/Vs and of the hypocenter locations. In order to select a reliable 1D starting model for the tomographic inversion, the seismic arrivals were first inverted with the algorithm VELEST (minimum 1D velocity model). The values of the damping factor were chosen with the aid of the trade-off curve between model variance and data variance. Six horizontal slices of the 3D P-wave velocity model and the respective slices of the Poisson ratio were constructed. We also set a reliability limit on the sections based on a comparison between the graphical representation of the diagonal elements of the resolution matrix (RDE) and the recovery ability of "checkerboard" models. To estimate the Curie point depth we followed the centroid procedure: the filtered residual dataset of the area was subdivided into 5 square subregions, named C1 to C5, sized 90 × 90 km² and overlapping each other by 70%. In each subregion the radially averaged power spectrum was computed. The slope of the longest-wavelength part of the spectrum for each subregion yields the centroid depth, zo, of the deepest layer of magnetic sources, while the slope of the second-longest-wavelength spectral segment yields the depth to the top, zt, of the same layer. Using the formula zb = 2zo − zt, the Curie depth estimate was derived for each subregion C and assigned at its centre. The estimated depths are between 7 and 8.1 km below sea level. The results showed the existence of a low seismic velocity volume with a high Poisson ratio at depths greater than 8 km. Since the Curie depth analysis indicated demagnetization of the material due to high temperatures at the top of this volume, we are led to consider that this volume is related to the presence of a magma chamber. Below the sites of the Quaternary volcanoes of Lichades, Vromolimni and Ag. Ioannis there is a local increase of the seismic velocity above the low velocity anomaly, which was attributed to a crystallized magma volume below the volcanoes. The coincidence of the spatial distribution of surface geothermal sites and volcanoes with the deep low velocity anomaly strengthens our interpretation of magma presence at this anomaly. The seismic slices at 4 km depth show that the supply of the thermal springs at the surface is related to the main faulted zones of the area.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhou, Shihong; Ma, Jing; Tan, Liying; Shen, Tao
2013-08-01
CMOS is a good candidate tracking detector for satellite optical communication systems, with the outstanding feature of sub-windowing enabled by the development of active pixel sensor (APS) technology. For inter-satellite optical communications it is critical to estimate the direction of the incident laser beam precisely by measuring the centroid position of the incident beam spot. The presence of detector noise results in measurement error, which degrades the tracking performance of the system. In this research, the measurement error of CMOS is derived taking detector noise into consideration. It is shown that the measurement error depends on the pixel noise, the size of the tracking sub-window (number of pixels), the intensity of the incident laser beam, and the relative size of the beam spot. The influence of these factors is analyzed by numerical simulation. We hope the results obtained in this research will be helpful in the design of CMOS-based tracking detectors for satellite optical communication systems.
Recognizing human activities using appearance metric feature and kinematics feature
NASA Astrophysics Data System (ADS)
Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye
2017-05-01
The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, an appearance metric feature and a kinematics feature, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract the more discriminative appearance metric feature. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension reduction technique called bidirectional 2-D principal component analysis, chosen to balance classification accuracy against time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
Spatial-Temporal dynamics of Newtonian and viscoelastic turbulence
NASA Astrophysics Data System (ADS)
Wang, Sung-Ning; Graham, Michael
2015-11-01
Introducing a trace amount of polymer into liquid turbulent flows can result in substantial reduction of friction drag. This phenomenon has been widely used in fluid transport, such as the Alaska crude oil pipeline. However, the mechanism is not well understood. We conduct direct numerical simulations of Newtonian and viscoelastic turbulence in large domains, in which the flow shows different characteristics in different regions. In some areas the drag is low and vortex motions are quiescent, while in other areas the drag is higher and the motions are more active. To identify these regions, we apply a statistical method, k-means clustering, which partitions the observations into k clusters by assigning each observation to its nearest centroid. The resulting partition maximizes the between-cluster variance. In the simulations, the observations are the instantaneous wall shear rate. Regions with different levels of drag are automatically identified by the partitioning algorithm. We find that the velocity profiles of the centroids exhibit characteristics similar to the individual coherent structures observed in minimal domain simulations. In addition, as viscoelasticity increases, polymer stretch becomes strongly correlated with wall shear stress. This work was supported by NSF grant CBET-1510291.
Realtime system for GLAS on WHT
NASA Astrophysics Data System (ADS)
Skvarč, Jure; Tulloch, Simon; Myers, Richard M.
2006-06-01
The new ground layer adaptive optics system (GLAS) on the William Herschel Telescope (WHT) on La Palma will be based on the existing natural guide star adaptive optics system called NAOMI. Part of the new development is a new control system for the tip-tilt mirror. Instead of the existing system, built around a custom multiprocessor computer made of C40 DSPs, this system uses an ordinary PC running the Linux operating system. It is equipped with a high sensitivity L3 CCD camera with an effective readout noise of nearly zero. The software for the tip-tilt system is being completely redeveloped in order to make use of object-oriented design, which should facilitate easier integration with the rest of the observing system at the WHT. The modular design of the system allows the incorporation of different centroiding and loop control methods. To test the system off-sky, we have built a laboratory bench using an artificial light source and a tip-tilt mirror. We present results on tip-tilt correction quality using different centroiding algorithms and different control loop methods at different light levels. This system will serve as a testing ground for a transition to a completely PC-based real-time control system.
Efficient fuzzy C-means architecture for image segmentation.
Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen
2011-01-01
This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with a spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroids are merged into one single updating process to evade the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process to accelerate the computational speed. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and a low misclassification rate.
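The "single updating process" the architecture exploits corresponds to fusing the two classical FCM steps, the membership update and the centroid update, into one pass over the data. A minimal software sketch of that fused iteration follows; the spatial-constraint term of the paper's variant is omitted, and the fuzzifier m is an assumed parameter.

```python
import numpy as np

def fcm_update(points, centroids, m=2.0, eps=1e-12):
    """One fused fuzzy c-means iteration: recompute the membership
    matrix and the centroids in a single pass."""
    d = np.maximum(np.linalg.norm(
        points[:, None, :] - centroids[None, :, :], axis=2), eps)
    u = 1.0 / (d ** (2.0 / (m - 1.0)))
    u /= u.sum(axis=1, keepdims=True)    # memberships sum to 1 per point
    w = u ** m                           # fuzzified weights
    return (w.T @ points) / w.sum(axis=0)[:, None]
```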
Multiobjective Resource-Constrained Project Scheduling with a Time-Varying Number of Tasks
Abello, Manuel Blanco
2014-01-01
In resource-constrained project scheduling (RCPS) problems, ongoing tasks are restricted to utilizing a fixed number of resources. This paper investigates a dynamic version of the RCPS problem where the number of tasks varies in time. Our previous work investigated a technique called mapping of task IDs for centroid-based approach with random immigrants (McBAR), which was used to solve the dynamic problem. However, the solution-searching ability of McBAR was investigated over only a few instances of the dynamic problem. As a consequence, only a small number of characteristics of McBAR, under the dynamics of the RCPS problem, were found. Further, only a few techniques were compared to McBAR with respect to its solution-searching ability. In this paper, (a) the significance of the subalgorithms of McBAR is investigated by comparing McBAR to several other techniques; and (b) the scope of investigation of the previous work is extended. In particular, McBAR is compared to the Estimation of Distribution Algorithm (EDA). As with McBAR, EDA is applied to solve the dynamic problem, an application that is unique in the literature. PMID:24883398
NASA Astrophysics Data System (ADS)
Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo
2010-01-01
With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a system that allows large reductions in data storage and computational effort. One of the most recent VQ techniques that handles the poor estimation of vector centroids due to biased data from undersampling is fuzzy declustering-based vector quantization (FDVQ). In this paper, we therefore propose and justify an FDVQ-based hidden Markov model (HMM) and investigate its effectiveness and efficiency in the classification of genotype-image phenotypes. We evaluate and compare the recognition accuracy of the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM). The experimental results show that the performances of FDVQ-HMM and LBG-HMM are similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database using a hypothesis t-test. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.
NASA Technical Reports Server (NTRS)
Gedney, Stephen D.; Lansing, Faiza
1993-01-01
The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids. The robustness of the generalized Yee-algorithm is that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately and far more conveniently using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high-performance computers in a highly efficient manner.
Time-resolved speckle effects on the estimation of laser-pulse arrival times
NASA Technical Reports Server (NTRS)
Tsai, B.-M.; Gardner, C. S.
1985-01-01
A maximum-likelihood (ML) estimator of pulse arrival time in laser ranging and altimetry is derived for the case of a pulse distorted by shot noise and time-resolved speckle. The performance of the estimator is evaluated for pulse reflections from flat diffuse targets and compared with the performance of a suboptimal centroid estimator and a suboptimal Bar-David ML estimator derived under the assumption of no speckle. In the large-signal limit, the accuracy of the estimator was found to improve as the width of the receiver observational interval increases. The timing performance of the estimator is expected to be highly sensitive to background noise when the received pulse energy is high and the receiver observational interval is large. Finally, in the speckle-limited regime the ML estimator performs considerably better than the suboptimal estimators.
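For orientation, the suboptimal centroid estimator used as a baseline is essentially the first moment of the received waveform over the observation window. A sketch under that reading; the median-based background subtraction is an added assumption, not part of the original derivation.

```python
import numpy as np

def centroid_arrival(t, s, window=None):
    """Suboptimal centroid estimate of pulse arrival time: the first
    moment of the (optionally windowed) received waveform."""
    t, s = np.asarray(t, float), np.asarray(s, float)
    if window is not None:
        lo, hi = window
        keep = (t >= lo) & (t <= hi)
        t, s = t[keep], s[keep]
    # Crude background-noise removal (an assumption for this sketch).
    s = np.clip(s - np.median(s), 0.0, None)
    return np.sum(t * s) / np.sum(s)
```

Widening the window admits more of the pulse energy but also more background noise, which is the trade-off the abstract describes.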
Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction
Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-01-01
We introduce here a new technique for segmenting the optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure, and using the blood vessels to segment the cup is important. We report on blood vessel extraction using first a top-hat transform and Otsu's segmentation function to detect the curves in the blood vessels (kinks) that indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset comprising three different sets of images, in which the cup was manually marked by six ophthalmologists. The accuracy of the algorithm was first tested on the three image sets independently; the final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. We then compared the algorithm's performance with the manual markings of the six ophthalmologists, determining the agreement among the ophthalmologists as well as with the algorithm. The best agreement was between ophthalmologists one, two, and five in 398 of 550 images, while the algorithm agreed with them in 356 images. PMID:28515636
Diffusion control for a tempered anomalous diffusion system using fractional-order PI controllers.
Juan Chen; Zhuang, Bo; Chen, YangQuan; Cui, Baotong
2017-05-09
This paper is concerned with the diffusion control problem of a tempered anomalous diffusion system based on fractional-order PI controllers. The contribution of this paper is to introduce fractional-order PI controllers into the tempered anomalous diffusion system for mobile-actuator motion and spraying control. For the proposed control force, convergence analysis of the system described by the mobile-actuator dynamical equations is presented based on Lyapunov stability arguments. Moreover, a new Centroidal Voronoi Tessellation (CVT) algorithm based on fractional-order PI controllers, henceforth called the FOPI-based CVT algorithm, is provided, together with a modified simulation platform called Fractional-Order Diffusion Mobile Actuator-Sensor 2-Dimension Fractional-Order Proportional Integral (FO-Diff-MAS2D-FOPI). Finally, extensive numerical simulations of the tempered anomalous diffusion process are presented to verify the effectiveness of the proposed fractional-order PI controllers.
Text String Detection from Natural Scenes by Structure-based Partition and Grouping
Yi, Chucai; Tian, YingLi
2012-01-01
Text information in natural scene images serves as an important clue for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text in a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components; 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string, such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) an adjacent character grouping method, and 2) a text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into a text string. The text line grouping method performs a Hough transform to fit a text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is represented by a rectangular region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out at multiple scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods in detecting text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset, collected by ourselves, containing text strings in non-horizontal orientations. PMID:21411405
Reliability of an experimental method to analyse the impact point on a golf ball during putting.
Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn
2015-06-01
This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid), the following variables were tested: distance of the impact point from the centroid, angle of the impact point from the centroid, and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables, reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7%, for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid of a golf ball, therefore allowing for the identification of the point of impact with the putter head, and is suitable for use in subsequent studies.
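The distance and angle of the impact point from the dimple-pattern centroid follow directly from the X, Y coordinates. A minimal sketch, with hypothetical argument names:

```python
import numpy as np

def impact_offset(impact_xy, centroid_xy):
    """Distance and angle (degrees) of the putter impact point from the
    ball's dimple-pattern centroid, given X, Y coordinates."""
    dx, dy = np.asarray(impact_xy, float) - np.asarray(centroid_xy, float)
    return np.hypot(dx, dy), np.degrees(np.arctan2(dy, dx))
```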
A focal plane metrology system and PSF centroiding experiment
NASA Astrophysics Data System (ADS)
Li, Haitao; Li, Baoquan; Cao, Yang; Li, Ligang
2016-10-01
In this paper, we present an overview of a detector array metrology testbed and a micro-pixel centroiding experiment currently under development at the National Space Science Center, Chinese Academy of Sciences. We discuss ongoing development efforts aimed at calibrating the intra-/inter-pixel quantum efficiency and pixel positions of a scientific-grade CMOS detector, and review significant progress in achieving higher-precision differential centroiding of pseudo-star images on a large-area back-illuminated CMOS detector. Without calibration of pixel positions and intra-pixel response, we have demonstrated that the standard deviation of the differential centroid is below 2.0e-3 pixels.
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2003-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids, using only passive elements to form the inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2004-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids, using only passive elements to form the inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
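In software terms, the pipeline these patents describe accumulates row and column inner products over a pixel window and then performs a single division. A NumPy sketch of that computation, with hypothetical window arguments; the switching network and passive analog elements have no direct software counterpart.

```python
import numpy as np

def windowed_centroid(img, row0, col0, h, w):
    """Centroid of a pixel window via row/column inner products,
    mirroring the accumulate-then-divide structure of the circuit."""
    win = np.asarray(img, float)[row0:row0 + h, col0:col0 + w]
    total = win.sum()
    cols = win.sum(axis=0)                          # column sums
    rows = win.sum(axis=1)                          # row sums
    x = col0 + np.dot(np.arange(w), cols) / total   # divider stage
    y = row0 + np.dot(np.arange(h), rows) / total
    return x, y
```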
New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Technical Reports Server (NTRS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.;
2012-01-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approx 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field-of-view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns_-adi under a BSD license.
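The trimmed-mean combine is simple to sketch with scipy.stats.trim_mean, assuming a registered image stack of shape (n_frames, ny, nx); the 10% cut fraction is an illustrative default, not the value tuned for SEEDS.

```python
import numpy as np
from scipy.stats import trim_mean

def combine_frames(stack, cut=0.1):
    """Combine a registered image sequence with a trimmed mean: per
    pixel, discard the top and bottom `cut` fraction of values and
    average the rest (robust to outliers, lower noise than a median)."""
    return trim_mean(np.asarray(stack, float), proportiontocut=cut, axis=0)
```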
NASA Astrophysics Data System (ADS)
Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas
2017-06-01
A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.
NASA Astrophysics Data System (ADS)
Pathak, Maharshi
City administrators and real-estate developers have been setting rather aggressive energy efficiency targets. This, in turn, has led building science research groups across the globe to focus on urban-scale building performance studies and the level of abstraction associated with such simulations. The increasing maturity of stakeholders towards energy efficiency and creating comfortable working environments has led researchers to develop methodologies and tools for addressing policy-driven interventions, whether for urban-level energy systems, buildings' operational optimization, or retrofit guidelines. Typically, these large-scale simulations are carried out by grouping buildings based on their design similarities, i.e., standardization of the buildings. Such an approach does not necessarily produce working inputs that make decision-making effective. To address this, a novel approach is proposed in the present study. The principal objective of this study is to propose, define, and evaluate a methodology for utilizing machine learning algorithms to define representative building archetypes for Stock-level Building Energy Modeling (SBEM) based on an operational parameter database. The study uses Phoenix-climate CBECS-2012 survey microdata for analysis and validation. Using the database, parameter correlations are studied to understand the relation between input parameters and energy performance. Contrary to precedent, the study establishes that energy performance is better explained by non-linear models, and this non-linear behavior is captured by advanced learning algorithms. Based on these algorithms, the buildings under study are grouped into meaningful clusters. The cluster medoids (statistically, the building that can be taken as the centroid of the cluster) are established to identify the level of abstraction that is acceptable for whole-building energy simulations and, subsequently, for retrofit decision-making. Further, the methodology is validated by conducting Monte Carlo simulations on 13 key input simulation parameters. The sensitivity analysis of these 13 parameters is used to identify the optimum retrofits. From the sample analysis, the envelope parameters are found to be the most sensitive with respect to the EUI of the building, and thus retrofit packages should be directed to maximize the reduction in energy usage.
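The medoid extraction step can be sketched independently of the particular clustering algorithm: given cluster labels over the operational-parameter matrix, the medoid is the member minimizing the summed distance to all other members. A minimal sketch with hypothetical inputs:

```python
import numpy as np

def cluster_medoids(X, labels):
    """For each cluster, return the member (e.g., a building's feature
    vector) minimizing the summed distance to all other members --
    the 'medoid' used as the representative archetype."""
    medoids = {}
    for c in np.unique(labels):
        members = X[labels == c]
        d = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
        medoids[c] = members[d.sum(axis=1).argmin()]
    return medoids
```

Unlike a centroid, the medoid is always an actual building in the stock, which is what makes it usable as a simulation archetype.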
Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F
2011-01-01
To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate the algorithm. The phantom was scanned on an ACUITY CBCT digital simulator, acquiring a full set of 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 degrees, respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 +/- 0.57) mm or less. The theta and phi angle errors were found to be (5.7 +/- 4.9) degrees and (6.0 +/- 4.1) degrees, respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 degrees compared to the measured seed projections. Each test trial converged in 10-20 iterations, with a computation time of 12-18 min/iteration on a 1 GHz processor. This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as centroids, from a small number of radiographic projections, in support of intraoperative planning and adaptive replanning. Unlike standard back-projection methods, gIFPM avoids the need to match corresponding seed images on the projections. The algorithm also successfully reconstructs overlapping, clustered, and highly migrated seeds in the implant. The accuracy of better than 1 mm and 6 degrees demonstrates that gIFPM has the potential to support 2D Task Group 43 calculations in clinical practice.
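The objective gIFPM minimizes can be written compactly; the forward projector that renders a candidate seed configuration into a projection image is the hard part and is omitted here. A sketch of the cost alone, with hypothetical inputs:

```python
import numpy as np

def ssd_cost(measured, computed):
    """Sum-of-squared-difference of pixel-by-pixel intensities between
    measured (autosegmented, binary) and computed projections, summed
    over all views -- the quantity minimized over candidate seed poses."""
    return sum(((np.asarray(m, float) - np.asarray(c, float)) ** 2).sum()
               for m, c in zip(measured, computed))
```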
Special Session on Adaptive Optics in Russia and China. Volume 23
1995-01-01
Martial arts striking hand peak acceleration, accuracy and consistency.
Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A
2013-01-01
The goal of this paper was to investigate the possible trade-off between peak hand acceleration and the accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum-effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes' mean squared distance from their centroid. We found that training experience was significantly correlated with hand peak acceleration prior to impact (r² = 0.456, p = 0.032) and accuracy (r² = 0.621, p = 0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated with consistency (r² = 0.085, p = 0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
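The two centroid-based measures defined here translate directly into code. A minimal sketch, assuming strike coordinates and a target point in the plane of the pressure sensor matrix:

```python
import numpy as np

def accuracy_and_consistency(points, target):
    """points: (N, 2) strike coordinates; target: (2,) aim point.
    Accuracy = radial distance from the strike centroid to the target;
    consistency = RMS distance of the strikes from their centroid."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    accuracy = np.linalg.norm(centroid - np.asarray(target, float))
    consistency = np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean())
    return centroid, accuracy, consistency
```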
The Height of a White-Light Flare and its Hard X-Ray Sources
NASA Technical Reports Server (NTRS)
Oliveros, Juan-Carlos Martinez; Hudson, Hugh S.; Hurford, Gordon J.; Kriucker, Saem; Lin, R. P.; Lindsey, Charles; Couvidat, Sebastien; Schou, Jesper; Thompson, W. T.
2012-01-01
We describe observations of a white-light (WL) flare (SOL2011-02-24T07:35:00, M3.5) close to the limb of the Sun, from which we obtain estimates of the heights of the optical continuum sources and those of the associated hard X-ray (HXR) sources. For this purpose, we use HXR images from the Reuven Ramaty High Energy Solar Spectroscopic Imager and optical images at 6173 Ang. from the Solar Dynamics Observatory. We find that the centroids of the impulsive-phase emissions in WL and HXRs (30-80 keV) match closely in central distance (angular displacement from Sun center), within uncertainties of order 0.2 arcsec. This directly implies a common source height for these radiations, strengthening the connection between visible flare continuum formation and the accelerated electrons. We also estimate the absolute heights of these emissions as vertical distances from Sun center; such a direct estimation has not been done previously, to our knowledge. Using a simultaneous 195 Ang. image from the Solar TErrestrial RElations Observatory spacecraft to identify the heliographic coordinates of the flare footpoints, we determine mean heights above the photosphere (as normally defined; tau = 1 at 5000 Ang.) of 305 +/- 170 km and 195 +/- 70 km, respectively, for the centroids of the HXR and WL footpoint sources of the flare. These heights are unexpectedly low in the atmosphere, and are consistent with the expected locations of tau = 1 for the approx 40 keV and 6173 Ang. photons observed, respectively.
Regularization of Instantaneous Frequency Attribute Computations
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computing a temporally local frequency: 1) a stabilized instantaneous frequency based on the theory of the analytic signal, and 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes. References: Fomel, S., "Local seismic attributes," Geophysics 72.3 (2007): A29-A33. Cohen, L., Time-Frequency Analysis: Theory and Applications, Prentice Hall (1995). Farquharson, C. G., and D. W. Oldenburg, "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems," Geophysical Journal International 156.3 (2004): 411-425. Taner, M. T., F. Koehler, and R. E. Sheriff, "Complex seismic trace analysis," Geophysics 44.6 (1979): 1041-1063.
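A minimal sketch of the second method, the time-varying power centroid of a spectrogram, using scipy.signal.stft; the damped denominator stands in for the regularized division discussed in the abstract, whereas the paper selects its regularization by L-curve or GCV.

```python
import numpy as np
from scipy.signal import stft

def centroid_frequency(x, fs, eps_frac=1e-3):
    """Time-varying power centroid of a Gabor-type spectrogram:
    f_c(t) = sum_f f * P(f, t) / sum_f P(f, t), with the per-column
    division damped to avoid blow-ups in low-energy frames."""
    f, t, Z = stft(x, fs=fs, nperseg=256)
    P = np.abs(Z) ** 2
    denom = P.sum(axis=0)
    denom = denom + eps_frac * denom.max()   # regularize the division
    return t, (f[:, None] * P).sum(axis=0) / denom
```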
June and August median streamflows estimated for ungaged streams in southern Maine
Lombard, Pamela J.
2010-01-01
Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares regression analysis (WLS) was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics-drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast-are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent. Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.
Shang, Fengjun; Jiang, Yi; Xiong, Anping; Su, Wen; He, Li
2016-11-18
With the integrated development of the Internet, wireless sensor technology, cloud computing, and mobile Internet, there has been a lot of attention given to research on and applications of the Internet of Things. A Wireless Sensor Network (WSN) is one of the important information technologies in the Internet of Things; it integrates multiple technologies to detect and gather information in a network environment by mutual cooperation, using a variety of methods to process and analyze data. This paper mainly researches the localization of sensor nodes in a wireless sensor network. Firstly, a multi-granularity region partition is proposed to divide the localization region. In the range-based step, the received signal strength indicator (RSSI) is used to estimate distance, with the optimal RSSI value computed by Gaussian fitting. Furthermore, a Voronoi diagram is used to divide the region: each anchor node is regarded as the center of a region, the whole positioning region is divided into several regions, sub-regions of neighboring nodes are combined into triangles, and the unknown node is locked into the resulting area. Secondly, the multi-granularity regional division and the Lagrange multiplier method are used to calculate the final coordinates. Because nodes are influenced by many factors in practical applications, two kinds of positioning methods are designed. When the unknown node is inside a positioning unit, we use the method of vector similarity, and the centroid algorithm is used to calculate the final coordinates of the unknown node. When the unknown node is outside a positioning unit, we establish a Lagrange equation containing the constraint condition to calculate initial coordinates, and then use the Taylor expansion formula to correct the coordinates of the unknown node. In addition, this localization method has been validated in a real-world environment.
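The centroid step for a node inside a positioning unit can be sketched as an inverse-distance-weighted centroid of the anchor coordinates, with distances inverted from RSSI through a log-distance path-loss model. The path-loss constants here are assumptions, and the Lagrange/Taylor refinement for nodes outside a unit is omitted.

```python
import numpy as np

def rssi_distance(rssi, rssi0=-40.0, n=2.5):
    """Invert a log-distance path-loss model: rssi = rssi0 - 10*n*log10(d).
    rssi0 (RSSI at 1 m) and exponent n are assumed, site-dependent values."""
    return 10.0 ** ((rssi0 - np.asarray(rssi, float)) / (10.0 * n))

def weighted_centroid(anchors, rssi):
    """Estimate the unknown node as an inverse-distance-weighted
    centroid of the anchor coordinates."""
    w = 1.0 / rssi_distance(rssi)
    A = np.asarray(anchors, float)
    return (w[:, None] * A).sum(axis=0) / w.sum()
```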
Kempka, Martin; Sjödahl, Johan; Björk, Anders; Roeraade, Johan
2004-01-01
A method for peak picking for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) is described. The method is based on the assumption that two sets of ions are formed during the ionization stage, which have Gaussian distributions but different velocity profiles. This gives rise to a certain degree of peak skewness. Our algorithm deconvolutes the peak and utilizes the fast velocity, bulk ion distribution for peak picking. Evaluation of the performance of the new method was conducted using peptide peaks from a bovine serum albumin (BSA) digest, and compared with the commercial peak-picking algorithms Centroid and SNAP. When using the new two-Gaussian algorithm, for strong signals the mass accuracy was equal to or marginally better than the results obtained from the commercial algorithms. However, for weak, distorted peaks, considerable improvement in both mass accuracy and precision was obtained. This improvement should be particularly useful in proteomics, where a lack of signal strength is often encountered when dealing with weakly expressed proteins. Finally, since the new peak-picking method uses information from the entire signal, no adjustments of parameters related to peak height have to be made, which simplifies its practical use.
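A plausible software rendering of the deconvolution: fit a sum of two Gaussians with scipy.optimize.curve_fit and report the center of the larger-amplitude (bulk) component. The initial guess and the bulk-component selection rule are assumptions for this sketch, not the published implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, t1, s1, a2, t2, s2):
    """Sum of two Gaussian components with independent centers/widths."""
    return (a1 * np.exp(-0.5 * ((t - t1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - t2) / s2) ** 2))

def pick_peak(t, y):
    """Deconvolute a skewed TOF peak into two Gaussians and report the
    center of the larger-amplitude (assumed bulk-ion) component."""
    i = int(np.argmax(y))
    # Crude initial guess; widths/offsets depend on the data's time units.
    p0 = [y[i], t[i], 1.0, 0.5 * y[i], t[i] + 1.0, 2.0]
    (a1, t1, s1, a2, t2, s2), _ = curve_fit(two_gauss, t, y, p0=p0,
                                            maxfev=10000)
    return t1 if a1 >= a2 else t2
```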
Characterization of trabecular bone using the backscattered spectral centroid shift.
Wear, Keith A
2003-04-01
Ultrasonic attenuation in bone in vivo is generally measured using a through-transmission method at the calcaneus. Although attenuation in the calcaneus has been demonstrated to be a useful predictor of osteoporotic fracture risk, measurements at other clinically important sites, such as the hip and spine, could potentially contain additional useful diagnostic information. Through-transmission measurements may not be feasible at these sites due to complex bone shapes and the increased amount of intervening soft tissue. The centroid shift of the backscattered signal is an index of attenuation slope and has been used previously to characterize soft tissues. In this paper, the centroid shift of signals backscattered from 30 trabecular bone samples in vitro was measured. Attenuation slope was also measured using a through-transmission method. The correlation coefficient between centroid shift and attenuation slope was -0.71, with a 95% confidence interval of (-0.86, -0.47). These results suggest that the backscattered spectral centroid shift may contain useful diagnostic information potentially applicable to the hip and spine.
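The spectral centroid shift is the downshift of the backscattered power-spectrum centroid relative to that of a reference (transmitted) pulse. A minimal FFT-based sketch, with an optional analysis band as a hypothetical argument:

```python
import numpy as np

def spectral_centroid(sig, fs, band=None):
    """Power-weighted mean frequency of a signal's spectrum."""
    S = np.abs(np.fft.rfft(sig)) ** 2
    f = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    if band is not None:
        keep = (f >= band[0]) & (f <= band[1])
        f, S = f[keep], S[keep]
    return (f * S).sum() / S.sum()

def centroid_shift(reference_pulse, backscatter, fs, band=None):
    """Downshift of the backscattered spectral centroid relative to the
    reference pulse; an index of attenuation slope."""
    return (spectral_centroid(reference_pulse, fs, band)
            - spectral_centroid(backscatter, fs, band))
```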
Garra, Brian S; Locher, Melanie; Felker, Steven; Wear, Keith A
2009-01-01
Ultrasonic backscatter measurements from vertebral bodies (L3 and L4) in nine women were performed using a clinical ultrasonic imaging system. Measurements were made through the abdomen. The location of a vertebra was identified from the bright specular reflection from the vertebral anterior surface. Backscattered signals were gated to isolate the signal emanating from the cancellous interiors of the vertebrae. The spectral centroid shift of the backscattered signal, which has previously been shown to correlate highly with bone mineral density (BMD) in human calcaneus in vitro, was measured. BMD was also measured in the nine subjects' vertebrae using a clinical bone densitometer. The correlation coefficient between centroid shift and BMD was r = -0.61. The slope of the linear fit was -160 kHz/(g/cm²). The negative slope was expected because the attenuation coefficient (and therefore the magnitude of the centroid downshift) is known from previous studies to increase with BMD. The centroid shift may be a useful parameter for characterizing bone in vivo.
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
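The benchmark statistic is standard: F-measure is the harmonic mean of sensitivity and positive predictive value over predicted base pairs. A one-function sketch:

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of sensitivity (tp / (tp + fn)) and positive
    predictive value (tp / (tp + fp)), with counts taken over
    correctly/incorrectly predicted base pairs."""
    sens = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return 2.0 * sens * ppv / (sens + ppv)
```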
Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing
NASA Astrophysics Data System (ADS)
Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis
2017-04-01
The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types which contribute significantly to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and, mainly, at large distances from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The mentioned classification approach [1] labels radar sampling volumes using as a criterion the Euclidean distance with respect to five-dimensional centroids depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a technique of k-medoids clustering, applied to a selected representative set of radar observations and coupled with statistical testing which introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1] in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability (p_i) that the observation corresponds to the hydrometeor type i (i = 1, ..., 9). The probability is derived from the Euclidean distance (d_i) of the observation to the centroid characterizing the hydrometeor type i. The parametrization of the d -> p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. It ensures balanced entropy values: low for pure volumes, and high for different possible combinations of mixed hydrometeors. The parametrized entropy is then applied to real polarimetric C- and X-band radar datasets, where we demonstrate the potential of linear de-mixing using a simplex formed by a set of pre-defined centroids in the five-dimensional space. As the main outcome, the proposed approach provides plausible proportions of the different hydrometeors contained in a given radar sampling volume. [1] Besic, N., Figueras i Ventura, J., Grazioli, J., Gabella, M., Germann, U., and Berne, A.: Hydrometeor classification through statistical clustering of polarimetric radar measurements: a semi-supervised approach, Atmos. Meas. Tech., 9, 4425-4445, doi:10.5194/amt-9-4425-2016, 2016.
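A sketch of the distance-to-probability mapping and normalized entropy; the Gaussian kernel and its width are assumed functional forms, whereas the paper parametrizes the d -> p transform on synthetic data.

```python
import numpy as np

def mixture_entropy(d, sigma=1.0):
    """Map centroid distances d_i to class probabilities p_i and a
    normalized entropy in [0, 1]: ~0 for a pure volume (one small
    distance), ~1 for a fully ambiguous mixture."""
    d = np.asarray(d, float)
    p = np.exp(-0.5 * (d / sigma) ** 2)   # assumed d -> p kernel
    p /= p.sum()
    h = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
    return p, h
```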
Estimation of selected seasonal streamflow statistics representative of 1930-2002 in West Virginia
Wiley, Jeffrey B.; Atkins, John T.
2010-01-01
Regional equations and procedures were developed for estimating seasonal 1-day 10-year, 7-day 10-year, and 30-day 5-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the seasonal U.S. Environmental Protection Agency harmonic-mean flows and the 50-percent flow-duration values. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Regional equations were developed using ordinary least squares regression using statistics from 117 U.S. Geological Survey continuous streamgage stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia-North, South-Central, and Eastern Panhandle Regions-were determined. Drainage area, average annual precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. The average standard error of estimates for the equations ranged from 12.6 to 299 percent. Procedures developed to estimate the selected seasonal streamflow statistics in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia that have independent variables within the limits of the stations used to develop the regional equations: drainage area from 16.3 to 1,516 square miles in the North Region, from 2.78 to 1,619 square miles in the South-Central Region, and from 8.83 to 3,041 square miles in the Eastern Panhandle Region; average annual precipitation from 42.3 to 61.4 inches in the South-Central Region and from 39.8 to 52.9 inches in the Eastern Panhandle Region; and longitude of the basin centroid from 79.618 to 82.023 decimal degrees in the North Region. All estimates of seasonal streamflow statistics are representative of the period from the 1930 to the 2002 climatic year.
Ferreira, Tiago B; Ribeiro, Paulo; Ribeiro, Filomena J; O'Neill, João G
2017-12-01
To compare the prediction error in the calculation of toric intraocular lenses (IOLs) associated with methods that estimate the power of the posterior corneal surface (ie, the Barrett toric calculator and the Abulafia-Koch formula) with that of methods that use real measurements obtained with Scheimpflug imaging: software that uses vectorial calculation (Panacea toric calculator: http://www.panaceaiolandtoriccalculator.com) and ray-tracing software (PhacoOptics, Aarhus Nord, Denmark). In 107 eyes of 107 patients undergoing cataract surgery with toric IOL implantation (Acrysof IQ Toric; Alcon Laboratories, Inc., Fort Worth, TX), the residual astigmatism predicted by each calculation method was compared with the manifest refractive astigmatism. Prediction error in residual astigmatism was calculated using vector analysis. All calculation methods resulted in overcorrection of with-the-rule astigmatism and undercorrection of against-the-rule astigmatism. Both estimation methods resulted in lower mean and centroid astigmatic prediction errors, and a larger number of eyes within 0.50 diopters (D) of absolute prediction error, than the methods using real measurements (P < .001). Centroid prediction error (CPE) was 0.07 D at 172° for the Barrett toric calculator and 0.13 D at 174° for the Abulafia-Koch formula (combined with the Holladay calculator). For the methods using real posterior corneal surface measurements, CPE was 0.25 D at 173° for the Panacea calculator and 0.29 D at 171° for the ray-tracing software. The Barrett toric calculator and Abulafia-Koch formula yielded the lowest astigmatic prediction errors; directly measuring total corneal power for toric IOL calculation was not superior to estimating it. [J Refract Surg. 2017;33(12):794-800.]
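The centroid prediction error here comes from standard double-angle vector analysis of astigmatism: each cylinder (magnitude, axis) maps to a Cartesian vector at twice the axis, errors are differenced per eye, and the centroid is the mean vector. A sketch, assuming magnitudes in diopters and axes in degrees; argument names are hypothetical.

```python
import numpy as np

def double_angle(mag, axis_deg):
    """Map cylinder (magnitude, axis) to double-angle Cartesian vectors."""
    th = np.radians(2.0 * np.asarray(axis_deg, float))
    m = np.asarray(mag, float)
    return np.column_stack([m * np.cos(th), m * np.sin(th)])

def centroid_prediction_error(pred_mag, pred_ax, man_mag, man_ax):
    """Centroid of astigmatic prediction errors (predicted minus
    manifest) across eyes; returns magnitude (D) and axis (deg)."""
    err = double_angle(pred_mag, pred_ax) - double_angle(man_mag, man_ax)
    cx, cy = err.mean(axis=0)
    return np.hypot(cx, cy), (np.degrees(np.arctan2(cy, cx)) / 2.0) % 180.0
```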
Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity
NASA Astrophysics Data System (ADS)
Jian, P.; Hung, S.; Tseng, T.
2013-12-01
Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion mostly relies on the azimuthal coverage of stations, data quality, and previously known Earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distribution, noise-contaminated waveform records, and uncertain Earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and showed that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimate of the resulting source parameters but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has steadily devoted itself to building up a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes that occurred in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. A linearized inversion scheme adopting efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors of biased centroid positions is implemented and inspected for improving source parameter determination of regional seismicity in Taiwan. Synthetic inversion tests demonstrate that the resolved moment tensors better match the hypothetical CMT solutions, and tend to suppress spurious non-double-couple components and reduce the trade-off between focal mechanism and centroid depth, if individual signal-to-noise ratios and correlation lengths for three-component seismograms at each station and mislocation uncertainties are properly taken into account. We further test the capability of our scheme in retrieving robust CMT information for mid-sized (Mw ~ 3.5) and offshore earthquakes in Taiwan, which offers immediate and broad applications in detailed modelling of the regional stress field and deformation pattern and in mapping of subsurface velocity structures.
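A minimal sketch of the linearized, covariance-weighted inversion idea described above, assuming a linear forward operator G and a data covariance matrix C_d; this is an illustration, not the authors' implementation.

```python
import numpy as np

def invert_moment_tensor(G, d, C_d):
    """Linear least-squares CMT inversion with data covariance C_d.

    G: (n_data, 6) Green's-function matrix for the 6 moment-tensor components.
    d: (n_data,) observed waveform samples.
    Returns the tensor estimate m and its posterior covariance C_m,
    which maps the data uncertainty onto the source-parameter uncertainty.
    """
    Cd_inv = np.linalg.inv(C_d)
    C_m = np.linalg.inv(G.T @ Cd_inv @ G)
    m = C_m @ G.T @ Cd_inv @ d
    return m, C_m
```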
NASA Technical Reports Server (NTRS)
Ying, Hao
1993-01-01
The fuzzy controllers studied in this paper are those that employ N trapezoidal-shaped membership functions for the input fuzzy sets, Zadeh fuzzy logic, and a centroid defuzzification algorithm for the output fuzzy set. The author analytically proves that the structure of these fuzzy controllers is the sum of a global nonlinear controller and a local nonlinear proportional-integral-like controller. As N approaches infinity, the global controller becomes a nonlinear controller while the local controller disappears. If linear control rules are used, the global controller becomes a global two-dimensional multilevel relay, which approaches a global linear proportional-integral (PI) controller as N approaches infinity.
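For reference, centroid defuzzification of a sampled output fuzzy set reduces to a weighted mean of the support; a minimal sketch:

```python
import numpy as np

def centroid_defuzzify(x, mu):
    """Centroid (center-of-gravity) defuzzification of an output fuzzy set.

    x:  sample points of the output universe of discourse.
    mu: aggregated membership grades at those points.
    """
    x, mu = np.asarray(x, float), np.asarray(mu, float)
    return np.sum(x * mu) / np.sum(mu)   # membership-weighted mean

# Example: triangular output set peaking at 0.5 on [0, 1]
x = np.linspace(0.0, 1.0, 101)
mu = np.maximum(0.0, 1.0 - np.abs(x - 0.5) / 0.5)
print(centroid_defuzzify(x, mu))  # -> 0.5 by symmetry
```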
NASA Astrophysics Data System (ADS)
Patra, Rusha; Dutta, Pranab K.
2015-07-01
Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, a mean square error of 0.638 × 10⁻³, and an object centroid error of 0.001 to 0.22 mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms, which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with a continuous-wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.
Andreev, Victor P; Rejtar, Tomas; Chen, Hsuan-Shen; Moskovets, Eugene V; Ivanov, Alexander R; Karger, Barry L
2003-11-15
A new denoising and peak picking algorithm (MEND, matched filtration with experimental noise determination) for analysis of LC-MS data is described. The algorithm minimizes both random and chemical noise in order to determine MS peaks corresponding to sample components. Noise characteristics in the data set are experimentally determined and used for efficient denoising. MEND is shown to enable low-intensity peaks to be detected, thus providing additional useful information for sample analysis. The process of denoising, performed in the chromatographic time domain, does not distort peak shapes in the m/z domain, allowing accurate determination of MS peak centroids, including low-intensity peaks. MEND has been applied to denoising of LC-MALDI-TOF-MS and LC-ESI-TOF-MS data for tryptic digests of protein mixtures. MEND is shown to suppress chemical and random noise and baseline fluctuations, as well as filter out false peaks originating from the matrix (MALDI) or mobile phase (ESI). In addition, MEND is shown to be effective for protein expression analysis by allowing selection of a large number of differentially expressed ICAT pairs, due to increased signal-to-noise ratio and mass accuracy.
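A minimal sketch of matched filtration in the chromatographic time domain, assuming a Gaussian peak template; MEND's experimental noise-determination step is omitted here.

```python
import numpy as np

def matched_filter_denoise(trace, peak_sigma, dt):
    """Denoise a chromatographic trace by correlation with a Gaussian peak template.

    trace: intensity samples in the chromatographic (time) domain.
    peak_sigma: assumed peak width (same time units as dt).
    """
    n = int(np.ceil(4 * peak_sigma / dt))          # template spans +/- 4 sigma
    t = np.arange(-n, n + 1) * dt
    template = np.exp(-0.5 * (t / peak_sigma) ** 2)
    template /= template.sum()                      # unit-area kernel
    return np.convolve(trace, template, mode="same")
```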
Visual based laser speckle pattern recognition method for structural health monitoring
NASA Astrophysics Data System (ADS)
Park, Kyeongtaek; Torbol, Marco
2017-04-01
This study performed system identification of a target structure by analyzing the laser speckle pattern recorded by a camera. The laser speckle pattern is generated by the diffuse reflection of a laser beam on a rough surface of the target structure. The camera, equipped with a red filter, records the scattered speckle particles of the laser light in real time, and the raw speckle image pixel data are fed to the graphics processing unit (GPU) in the system. The algorithm for laser speckle contrast analysis (LASCA) computes the laser speckle contrast images and the laser speckle flow images. The k-means clustering algorithm is used to classify the pixels in each frame, and the clusters' centroids, which function as virtual sensors, track the displacement between frames in the time domain. The fast Fourier transform (FFT) and frequency domain decomposition (FDD) compute the modal properties of the structure: natural frequencies and damping ratios. This study takes advantage of the large-scale computational capability of the GPU. The algorithm is written in Compute Unified Device Architecture (CUDA C), which allows the processing of speckle images in real time.
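The LASCA step described above computes the local speckle contrast K = sigma/mean over a sliding window; a minimal, unoptimized CPU sketch, with the window size as an assumed parameter:

```python
import numpy as np

def speckle_contrast(image, win=7):
    """Local speckle contrast K = sigma/mean over a sliding window (LASCA)."""
    img = np.asarray(image, float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    K = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            K[i, j] = w.std() / (w.mean() + 1e-12)  # guard against zero mean
    return K
```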
NASA Astrophysics Data System (ADS)
Liang, Cunren; Zeng, Qiming; Jia, Jianying; Jiao, Jian; Cui, Xi'ai
2013-02-01
Scanning synthetic aperture radar (ScanSAR) mode is an efficient way to map large scale geophysical phenomena at low cost. The work presented in this paper is dedicated to ScanSAR interferometric processing and its implementation by making full use of existing standard interferometric synthetic aperture radar (InSAR) software. We first discuss the properties of the ScanSAR signal and its phase-preserved focusing using the full aperture algorithm in terms of interferometry. Then a complete interferometric processing flow is proposed. The standard ScanSAR product is decoded subswath by subswath with burst gaps padded with zero-pulses, followed by a Doppler centroid frequency estimation for each subswath and a polynomial fit of all of the subswaths for the whole scene. The burst synchronization of the interferometric pair is then calculated, and only the synchronized pulses are kept for further interferometric processing. After the complex conjugate multiplication of the interferometric pair, the residual non-integer pulse repetition interval (PRI) part between adjacent bursts caused by zero padding is compensated by resampling using a sinc kernel. The subswath interferograms are then mosaicked, in which a method is proposed to remove the subswath discontinuities in the overlap area. Then the following interferometric processing goes back to the traditional stripmap processing flow. A processor written with C and Fortran languages and controlled by Perl scripts is developed to implement these algorithms and processing flow based on the JPL/Caltech Repeat Orbit Interferometry PACkage (ROI_PAC). Finally, we use the processor to process ScanSAR data from the Envisat and ALOS satellites and obtain large scale deformation maps in the radar line-of-sight (LOS) direction.
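Doppler centroid estimation for a subswath is commonly done with the pulse-pair (average azimuth autocorrelation) estimator; the sketch below shows that standard method, which may differ from the estimator used in the paper. A polynomial fit across range bins and subswaths, as described above, would follow.

```python
import numpy as np

def doppler_centroid_estimate(slc, prf):
    """Estimate the Doppler centroid from azimuth lines (pulse-pair method).

    slc: complex 2-D array (azimuth x range) for one subswath.
    prf: pulse repetition frequency in Hz.
    The phase of the lag-1 azimuth autocorrelation gives f_dc modulo the PRF.
    """
    acc = np.sum(slc[1:, :] * np.conj(slc[:-1, :]))  # average over all samples
    return prf * np.angle(acc) / (2.0 * np.pi)
```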
Factors related to the joint probability of flooding on paired streams
Koltun, G.F.; Sherwood, J.M.
1998-01-01
The factors related to the joint probability of flooding on paired streams were investigated and quantified to provide information to aid in the design of hydraulic structures where the joint probability of flooding is an element of the design criteria. Stream pairs were considered to have flooded jointly at the design-year flood threshold (corresponding to the 2-, 10-, 25-, or 50-year instantaneous peak streamflow) if peak streamflows at both streams in the pair were observed or predicted to have equaled or exceeded the threshold on a given calendar day. Daily mean streamflow data were used as a substitute for instantaneous peak streamflow data to determine which flood thresholds were equaled or exceeded on any given day. Instantaneous peak streamflow data, when available, were used preferentially to assess flood-threshold exceedance. Daily mean streamflow data for each stream were paired with concurrent daily mean streamflow data at the other streams. Observed probabilities of joint flooding, determined for the 2-, 10-, 25-, and 50-year flood thresholds, were computed as the ratios of the total number of days when streamflows at both streams concurrently equaled or exceeded their flood thresholds (events) to the total number of days when streamflow at either stream equaled or exceeded its flood threshold (trials). A combination of correlation analyses, graphical analyses, and logistic-regression analyses was used to identify and quantify factors associated with the observed probabilities of joint flooding (event-trial ratios). The analyses indicated that the distance between drainage area centroids, the ratio of the smaller to larger drainage area, the mean drainage area, and the centroid angle adjusted 30 degrees were the basin characteristics most closely associated with the joint probability of flooding on paired streams in Ohio. In general, the analyses indicated that the joint probability of flooding decreases with an increase in centroid distance and increases with increases in drainage area ratio, mean drainage area, and centroid angle adjusted 30 degrees. Logistic-regression equations were developed that can be used to estimate the probability that streamflows at two streams jointly equal or exceed the 2-year flood threshold given that the streamflow at one of the two streams equals or exceeds the 2-year flood threshold. The logistic-regression equations are applicable to stream pairs in Ohio (and border areas of adjacent states) that are unregulated, free of significant urban influences, and have characteristics similar to those of the 304 gaged stream pairs used in the logistic-regression analyses. Contingency tables were constructed and analyzed to provide information about the bivariate distribution of floods on paired streams. The contingency tables showed that the percentage of trials in which both streams in the pair concurrently flood at identical recurrence-interval ranges generally increased as centroid distances decreased, and was greatest for stream pairs with adjusted centroid angles greater than or equal to 60 degrees and drainage area ratios greater than or equal to 0.01. Also, as centroid distance increased, streamflow at one stream in the pair was more likely to be in a less than 2-year recurrence-interval range when streamflow at the second stream was in a 2-year or greater recurrence-interval range.
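A logistic-regression estimate of the joint flooding probability takes the familiar form sketched below; the coefficient vector is a placeholder, and the report's fitted equations should be used in practice.

```python
import numpy as np

def joint_flood_probability(beta, centroid_dist_mi, area_ratio, mean_area_mi2, angle_deg):
    """Logistic-regression estimate of the joint 2-year flood probability.

    beta: fitted coefficients (b0..b4); hypothetical placeholder values here.
    Predictors follow the basin characteristics identified in the study.
    """
    x = np.array([1.0, centroid_dist_mi, area_ratio, mean_area_mi2, angle_deg])
    z = float(np.asarray(beta, float) @ x)
    return 1.0 / (1.0 + np.exp(-z))   # logistic link
```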
A BRIGHT SUBMILLIMETER SOURCE IN THE BULLET CLUSTER (1E0657-56) FIELD DETECTED WITH BLAST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rex, Marie; Devlin, Mark J.; Dicker, Simon R.
2009-09-20
We present the 250, 350, and 500 μm detection of bright submillimeter emission in the direction of the Bullet Cluster measured by the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST). The 500 μm centroid is coincident with an AzTEC 1.1 mm point-source detection at a position close to the peak lensing magnification produced by the cluster. However, the 250 μm and 350 μm centroids are elongated and shifted toward the south with a differential shift between bands that cannot be explained by pointing uncertainties. We therefore conclude that the BLAST detection is likely contaminated by emission from foreground galaxies associated with the Bullet Cluster. The submillimeter redshift estimate based on 250-1100 μm photometry at the position of the AzTEC source is z_phot = 2.9 (+0.6/−0.3), consistent with the infrared color redshift estimation of the most likely Infrared Array Camera counterpart. These flux densities indicate an apparent far-infrared (FIR) luminosity of L_FIR = 2 × 10¹³ L_sun. When the amplification due to the gravitational lensing of the cluster is removed, the intrinsic FIR luminosity of the source is found to be L_FIR ≤ 10¹² L_sun, consistent with typical luminous infrared galaxies.
Reducing the time requirement of k-means algorithm.
Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou
2012-01-01
Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which have large datasets with large dimension size d. In k-means clustering, we are given a set of n data points in d-dimensional space R(d) and an integer k. The problem is to determine a set of k points in R(d), called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and k-means clustering. We provide a correctness proof for this algorithm. Results obtained from testing the algorithm on three biological datasets and six non-biological datasets (three of these are real, while the other three are simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm's clusters against clusters of a known structure using the Hubert-Arabie Adjusted Rand index (ARI(HA)). We found that when k is close to d, the quality is good (ARI(HA)>0.8) and when k is not close to d, the quality of our new k-means algorithm is excellent (ARI(HA)>0.9). In this paper, the emphasis is on reducing the time requirement of the k-means algorithm and on its application to microarray data, motivated by the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on the six non-biological datasets. PMID:23239974
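One simple way to exploit the PCA/k-means relationship mentioned above is to seed the centroids along the first principal component; the sketch below illustrates that idea and is not the authors' exact algorithm.

```python
import numpy as np

def pca_seeded_kmeans(X, k, iters=50):
    """k-means with centroids seeded along the first principal component."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[0]                       # projection onto the 1st PC
    order = np.argsort(scores)
    seeds = np.array_split(order, k)          # contiguous slices along the PC
    centers = np.array([X[idx].mean(axis=0) for idx in seeds])
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                  # assign to nearest centroid
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers
```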
NASA Astrophysics Data System (ADS)
Bie, Lidong; Hicks, Stephen; Garth, Thomas; Gonzalez, Pablo; Rietbrock, Andreas
2018-06-01
On 25 November 2016, a Mw 6.6 earthquake ruptured the Muji fault in western Xinjiang, China. We investigate the earthquake rupture independently using geodetic observations from Interferometric Synthetic Aperture Radar (InSAR) and regional seismic recordings. To constrain the fault geometry and slip distribution, we test different combinations of fault dip and slip direction to reproduce the InSAR observations. Both the InSAR observations and the optimal distributed slip model suggest buried rupture of two asperities separated by a gap of greater than 5 km. Additional seismic gaps exist at the ends of both asperities that failed in the 2016 earthquake. To reveal the dynamic history of asperity failure, we invert regional seismic waveforms for multiple centroid moment tensors and construct a moment rate function. The results show a small centroid time gap of 2.6 s between the two sub-events. Considering the >5 km gap between the two asperities and the short time interval, we propose that the two asperities failed near-simultaneously, rather than in a cascading rupture-propagation style. The second sub-event is located ∼39 km east of the epicenter, and its centroid time is 10.7 s. This leads to an estimate of average rupture velocity of 3.7 km/s as an upper bound, consistent with upper-crust shear wave velocity in this region. We interpret that the rupture front propagated at sub-shear wave velocities, but that the second sub-event had a reduced or asymmetric rupture time, leading to the apparent near-simultaneous moment release of the two asperities.
NASA Astrophysics Data System (ADS)
Hejrani, Babak; Tkalčić, Hrvoje; Fichtner, Andreas
2017-07-01
Although both the earthquake mechanism and 3-D Earth structure contribute to the seismic wavefield, the latter is usually assumed to be layered in source studies, which may limit the quality of the source estimate. To overcome this limitation, we implement a method that takes advantage of a 3-D heterogeneous Earth model, recently developed for the Australasian region. We calculate centroid moment tensors (CMTs) for earthquakes in Papua New Guinea (PNG) and the Solomon Islands. Our method is based on a library of Green's functions for each source-station pair for selected Geoscience Australia and Global Seismic Network stations in the region, distributed on a 3-D grid covering the seismicity down to 50 km depth. For the calculation of Green's functions, we utilize a spectral-element method for the solution of the seismic wave equation. Seismic moment tensors were calculated using least squares inversion, and the 3-D location of the centroid is found by grid search. Through several synthetic tests, we confirm a trade-off between the location and the correct input moment tensor components when using a 1-D Earth model to invert synthetics produced in a 3-D heterogeneous Earth. Our CMT catalogue for PNG, in comparison to the global CMT, shows a meaningful increase in the double-couple percentage (up to 70%). Another significant difference that we observe is in the mechanism of events with depths shallower than 15 km and Mw < 6, which contributes to accurate tectonic interpretation of the region.
Concurrent Timbres in Orchestration: a Perceptual Study of Factors Determining "blend"
NASA Astrophysics Data System (ADS)
Sandell, Gregory John
Orchestration often involves selecting instruments for concurrent presentation, as in melodic doubling or chords. One evaluation of the aural outcome of such choices is along the continuum of "blend": whether the instruments fuse into a single composite timbre, segregate into distinct timbral entities, or fall somewhere in between the two extremes. This study investigates, through perceptual experimentation, the acoustical correlates of blend for 15 natural-sounding orchestral instruments presented in concurrently-sounding pairs (e.g. flute-cello, trumpet -oboe, etc.). Ratings of blend showed primary effects for centroid (the location of the midpoint of the spectral energy distribution) and duration of the onset for the tones. Lower average values of both centroid and onset duration for a pair of tones led to increased blends, as did closeness in value for the two factors. Blend decreased (instruments segregated) with higher average values or increased difference in value for the two factors. The musical interval of presentation slightly affected the relative importance of these two mechanisms, with unison intervals determined more by lower average centroid, and minor thirds determined more by closeness in centroid. The contribution of onset in general was slightly more pronounced in the unison conditions than in the minor third condition. Additional factors contributing to blend were correlation of amplitude and centroid envelopes (blend increased as temporal patterns rose and fell in synchrony) and similarity in the overall amount of fundamental frequency perturbation (decreased blend with increasing jitter from both tones). To confirm the importance of centroid as an independent factor determining blend, pairs of tones including instruments with artificially changed centroids were rated for blend. Judgments for several versions of the same instrument pair showed that blend decreased as the altered instrument increased in centroid, corroborating the earlier experiments. Other factors manipulated were amplitude level and the degree of inharmonicity. A survey of orchestration manuals showed many illustrations of "blending" combinations of instruments that were consistent with the results of these experiments. This study's acoustically-based guidelines for blend augment instance-based methods of traditional orchestration teaching, providing underlying abstractions helpful for evaluating the blend of arbitrary combinations of instruments.
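The spectral centroid used throughout this study is the amplitude-weighted mean frequency of a tone's spectrum; a minimal sketch:

```python
import numpy as np

def spectral_centroid(x, fs):
    """Spectral centroid: amplitude-weighted mean frequency of a signal.

    x: audio samples; fs: sampling rate in Hz.
    """
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)
```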
Trained neurons-based motion detection in optical camera communications
NASA Astrophysics Data System (ADS)
Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho
2018-04-01
A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This motion detection can be considered a third functionality of OCC, in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. Motion is detected by tracking the centroid of the user's finger movement through the OCC link via the camera. Unlike conventional trained-neuron approaches, the proposed TNMD is trained not with the motion itself but with centroid data samples, thus providing more accurate detection and a far less complex detection algorithm. The experimental results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performance at a transmission distance of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. OCC combined with the proposed TNMD can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart home environment.
Raj, A Arockia Bazil; Selvi, J Arputha Vijaya; Kumar, D; Sivakumaran, N
2014-06-10
In a free-space optical link (FSOL), atmospheric turbulence causes fluctuations in both the intensity and the phase of the received beam, impairing link performance. Beam motion is one of the main causes of major power loss. This paper presents an investigation of the performance of two types of controller designed to keep a laser beam aimed at a particular spot under dynamic disturbances. Nonlinear input-output data maps obtained from multiple experiments are used as the principal components of the controller designs. The first design is based on the Taguchi method, while the second uses an artificial neural network. These controllers process the beam-location information from a static linear map of the 2D plane (an optoelectronic position detector acting as observer) and then generate the outputs needed to steer the beam with a microelectromechanical fast-steering mirror. The beam centroid is computed using a monopulse algorithm. The suitability and effectiveness of the proposed controllers are comprehensively assessed and quantitatively measured in terms of coefficient of correlation, correction speed, control exactness, centroid displacement, and stability of the receiver signal, using experimental results from the FSO link established over a horizontal range of 0.5 km at an altitude of 15.25 m. The test field is open flat terrain with grass and a few isolated obstacles.
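A monopulse centroid estimate from a four-quadrant position detector reduces to sum-and-difference ratios; a minimal sketch, with the quadrant layout as an assumption:

```python
def monopulse_centroid(quadrants):
    """Normalized beam-centroid offsets from a four-quadrant detector.

    quadrants: intensities (A, B, C, D), assuming A/B are the upper cells,
    C/D the lower cells, and A/C on the left. Returns (x, y) in [-1, 1].
    """
    A, B, C, D = quadrants
    total = A + B + C + D
    x = ((B + D) - (A + C)) / total   # right minus left
    y = ((A + B) - (C + D)) / total   # top minus bottom
    return x, y
```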
NASA Astrophysics Data System (ADS)
Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi
Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of our society. Nowadays, image scrambling schemes are often used as one part of a watermarking algorithm to enhance security. Therefore, how to select an image scrambling scheme, and what kind of image scrambling scheme may be used for watermarking, are key problems. An evaluation method for image scrambling schemes can serve as a useful test tool for revealing the properties or flaws of a scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and the centroid difference of bit-planes is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that, for a general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes is nearly the same as that for all 8 bit-planes. Thus, instead of taking 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes to find the scrambling degree. This 50% reduction in computational cost makes our scheme efficient.
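The bit-plane selection described above can be sketched as follows; the entropy and centroid-difference measures themselves are omitted.

```python
import numpy as np

def bit_planes(image, planes=4):
    """Extract the most significant bit-planes of an 8-bit gray-scale image.

    planes=4 keeps bits 7..4, following the paper's observation that the
    first 4 significant bit-planes suffice for the scrambling-degree test.
    """
    img = np.asarray(image).astype(np.uint8)
    return [(img >> b) & 1 for b in range(7, 7 - planes, -1)]
```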
In Pursuit of LSST Science Requirements: A Comparison of Photometry Algorithms
NASA Astrophysics Data System (ADS)
Becker, Andrew C.; Silvestri, Nicole M.; Owen, Russell E.; Ivezić, Željko; Lupton, Robert H.
2007-12-01
We have developed an end-to-end photometric data-processing pipeline to compare current photometric algorithms commonly used on ground-based imaging data. This test bed is exceedingly adaptable and enables us to perform many research and development tasks, including image subtraction and co-addition, object detection and measurements, the production of photometric catalogs, and the creation and stocking of database tables with time-series information. This testing has been undertaken to evaluate existing photometry algorithms for consideration by a next-generation image-processing pipeline for the Large Synoptic Survey Telescope (LSST). We outline the results of our tests for four packages: the Sloan Digital Sky Survey's Photo package, DAOPHOT and ALLFRAME, DOPHOT, and two versions of Source Extractor (SExtractor). The ability of these algorithms to perform point-source photometry, astrometry, shape measurements, and star-galaxy separation and to measure objects at low signal-to-noise ratio is quantified. We also perform a detailed crowded-field comparison of DAOPHOT and ALLFRAME, and profile the speed and memory requirements in detail for SExtractor. We find that both DAOPHOT and Photo are able to perform aperture photometry to high enough precision to meet LSST's science requirements, but perform less adequately at PSF-fitting photometry. Photo performs the best at simultaneous point- and extended-source shape and brightness measurements. SExtractor is the fastest algorithm, and recent upgrades in the software yield high-quality centroid and shape measurements with little bias toward faint magnitudes. ALLFRAME yields the best photometric results in crowded fields.
TH-AB-202-03: A Novel Tool for Computing Deliverable Doses in Dynamic MLC Tracking Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fast, M; Kamerling, C; Menten, M
2016-06-15
Purpose: In tracked dynamic multi-leaf collimator (MLC) treatments, segments are continuously adapted to the target centroid motion in beams-eye-view. On-the-fly segment adaptation, however, potentially induces dosimetric errors due to the finite MLC leaf width and non-rigid target motion. In this study, we outline a novel tool for computing the 4D dose of lung SBRT plans delivered with MLC tracking. Methods: The following automated workflow was developed: A) centroid tracking, where the initial segments are morphed to each 4DCT phase based on the beams-eye-view GTV shift (followed by a dose calculation on each phase); B) re-optimized tracking, in which all morphed initial plans from (A) are further optimized ("warm-started") in each 4DCT phase using the initial optimization parameters but phase-specific volume definitions. Finally, both dose sets are accumulated to the reference phase using deformable image registration. Initial plans were generated according to the RTOG-1021 guideline (54Gy, 3-Fx, equidistant 9-beam IMRT) on the peak-exhale (reference) phase of a phase-binned 4DCT. Treatment planning and delivery simulations were performed in RayStation (research v4.6) using our in-house segment-morphing algorithm, which directly links to RayStation through a native C++ interface. Results: Computing the tracking plans and 4D dose distributions via the in-house interface takes 5 and 8 minutes respectively for centroid and re-optimized tracking. For a sample lung SBRT patient with 14mm peak-to-peak motion in the sup-inf direction, mainly perpendicular leaf motion (0-collimator) resulted in small dose changes for PTV-D95 (−13cGy) and GTV-D98 (+18cGy) for the centroid tracking case compared to the initial plan. Modest reductions of OAR doses (e.g., spinal cord D2: −11cGy) were achieved in the idealized tracking case. Conclusion: This study presents an automated "1-click" workflow for computing deliverable MLC tracking doses in RayStation. Adding a non-deliverable re-optimized tracking scenario is expected to help quantify plan robustness for more challenging patients with anatomy deformations. We acknowledge support of the MLC tracking research from Elekta AB. MFF is supported by Cancer Research UK under Programme C33589/A19908. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR.
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges, supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
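The per-species centroid and area of influence defined above can be computed directly; a minimal sketch:

```python
import numpy as np

def influence_area(points):
    """Centroid and influence radius of one species' occurrence records.

    points: (n, 2) array of (x, y) coordinates. The radius is the distance
    from the centroid to the farthest occurrence, following the GIE definition.
    """
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    radius = np.linalg.norm(pts - centroid, axis=1).max()
    return centroid, radius
```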
Messinger, Terence; Paybins, Katherine S.
2014-01-01
Correlations of flows at pairs of streamgages were evaluated using the Spearman's rho correlation coefficient to better identify gages that can be used as index gages to estimate daily flow at ungaged stream sites in West Virginia. Much of West Virginia (77 percent) is within areas where Spearman's rho for daily streamflow between streamgages on unregulated streams (unregulated streamgages) is greater than 0.9; most withdrawals from ungaged streams for shale gas well hydraulic fracturing are being made in these areas. Most of West Virginia (>99 percent) is within zones where Spearman's rho between streamgages on unregulated streams is greater than 0.85. Withdrawals for hydraulic fracturing are made from ungaged streams in areas where Spearman's rho between streamgages on unregulated streams is less than 0.9, but because spatial correlation is partly a function of the density of the streamgaging network, adding or reactivating several streamgages would be likely to result in correlations of 0.90 or higher in these areas. Seasonal differences in the strength and spatial extent of correlations of daily streamflows are great. The strongest correlations among streamgages are for fall, followed by spring, then winter. One possible explanation for the weak correlations for summer may be that precipitation and runoff associated with convective storms affect one basin and miss nearby basins. A comparison of correlation patterns during previously identified climatic periods shows that the strongest correlations occurred during 1963–69, a period of drought, and the weakest during 1970–79, a wet period. The apparent effect of frequent rain during 1970–79 overshadowed streamgage-network density, which was at its historic maximum in West Virginia at that time, so that the extent of areas with high correlation to at least one streamgage was smaller during 1970–79 than during 1963–69. Correlations for 1992 to 2011 were slightly weaker than those for 1963 to 1969. The relation between correlation and distance between basin centroids was determined to be stronger for streamgage pairs in the Ohio River Basin than for pairs in the Atlantic Slope River Basins, which in turn was stronger than the relation between pairs of streamgages split between the two major basins. Quantile regression equations were developed for these three comparisons to estimate the Spearman's rho correlation coefficient for streamgage pairs using distance between basin centroids as a predictor variable. The equations can be used for streamgage network planning. For the Ohio River Basin, the distance between basin centroids at which 50 percent of streamgage pairs would exceed a Spearman's rho of 0.95 is 9 miles. The distance between basin centroids at which 50 percent of streamgage pairs would exceed a Spearman's rho of 0.90 is 25 miles, and the distance at which 50 percent of streamgage pairs would exceed a Spearman's rho of 0.85 is 48 miles. For the Atlantic Slope River Basins, the distance between basin centroids at which 50 percent of streamgage pairs would exceed a Spearman's rho of 0.95 is 1 mile. The distance between basin centroids at which 50 percent of streamgage pairs would exceed a Spearman's rho of 0.90 is 13 miles, and the distance at which 50 percent of streamgage pairs would exceed a Spearman's rho of 0.85 is 41 miles. For pairs of streamgages split between the two major basins, the regression equation gives a value of 0.84 for the correlation coefficient at zero miles.
On maps of correlations, the shape of strongly correlated areas for streamgages in the Ohio River Basin is generally round. In the Valley and Ridge Physiographic Province, which generally coincides with the Atlantic Slope River Basins within the study area, areas strongly correlated with streamgages generally coincide with major valleys.
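A minimal sketch of the Spearman's rho computation for one streamgage pair; ties in daily flows are ignored for brevity.

```python
import numpy as np

def spearman_rho(flow_a, flow_b):
    """Spearman's rho between concurrent daily mean flows at two streamgages."""
    ra = np.argsort(np.argsort(flow_a)).astype(float)  # ranks (ties broken arbitrarily)
    rb = np.argsort(np.argsort(flow_b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))
```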
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Inoue, H.
2008-06-01
We propose a method of waveform inversion to rapidly and routinely estimate both the moment function and the centroid moment tensor (CMT) of an earthquake. In this method, waveform inversion is carried out in the frequency domain to obtain the moment function more rapidly than when solved in the time domain. We assume a pure double-couple source mechanism in order to stabilize the solution when using data from a small number of seismic stations. The fault and slip orientations are estimated by a grid search with respect to the strike, dip and rake angles. The moment function in the time domain is obtained from the inverse Fourier transform of the frequency components determined by the inversion. Since observed waveforms used for the inversion are limited in a particular frequency band, the estimated moment function is a bandpassed form. We develop a practical approach to estimate the deconvolved form of the moment function, from which we can reconstruct detailed rupture history and the seismic moment. The source location is determined by a spatial grid search using adaptive grid spacings, which are gradually decreased in each step of the search. We apply this method to two events that occurred in Indonesia by using data from a broad-band seismic network in Indonesia (JISNET): one northeast of Sulawesi (Mw = 7.5) on 2007 January 21, and the other south of Java (Mw = 7.5) on 2006 July 17. The source centroid locations and mechanisms we estimated for both events are consistent with those determined by the Global CMT Project and the National Earthquake Information Center of the U.S. Geological Survey. The estimated rupture duration of the Sulawesi event is 16 s, which is comparable to a typical duration for earthquakes of this magnitude, while that of the Java event is anomalously long (176 s), suggesting that this event was a tsunami earthquake. Our application demonstrates that this inversion method has great potential for rapid and routine estimations of both the CMT and the moment function, and may be useful for identification of tsunami earthquakes.
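The strike/dip/rake grid search described above can be sketched as follows; `misfit` stands in for the frequency-domain waveform residual, and the grid spacings are assumed values.

```python
import numpy as np
from itertools import product

def grid_search_mechanism(misfit, d_strike=10, d_dip=10, d_rake=10):
    """Grid search for the best double-couple (strike, dip, rake).

    misfit: callable returning the waveform misfit for a trial mechanism;
    a placeholder for the frequency-domain inversion residual.
    """
    best = (np.inf, None)
    for s, d, r in product(range(0, 360, d_strike),
                           range(0, 91, d_dip),
                           range(-180, 180, d_rake)):
        m = misfit(s, d, r)
        if m < best[0]:
            best = (m, (s, d, r))
    return best   # (minimum misfit, (strike, dip, rake))
```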
R2 TRI facilities with 1999-2011 risk related estimates throughout the census blockgroup
This dataset delineates the distribution of estimated risk from TRI facilities for 1999-2011 throughout the census block groups of the region, using the Office of Pollution Prevention and Toxics (OPPT) Risk-Screening Environmental Indicators (RSEI) model. The model uses the reported quantities of TRI chemical releases to estimate the impacts associated with each type of air release or transfer by every TRI facility. RSEI was run to generate the estimated risk for each TRI facility in the region, and the result from the model was joined to the TRI spatial data. Estimated risk values for each census block group were calculated based on the inverse distance of all facilities within a 50 km radius of the census block group centroid. The estimated risk value for each census block group is thus an aggregated value that takes into account the estimated potential risk of all facilities within the search radius (50 km).
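A minimal sketch of the inverse-distance aggregation described above; the actual RSEI weighting is more involved, and coordinates are assumed to be projected to kilometers.

```python
import numpy as np

def blockgroup_risk(facility_xy, facility_risk, centroid_xy, radius_km=50.0):
    """Aggregate facility risk scores to one census block group centroid.

    Inverse-distance weighting over all facilities within the search radius.
    facility_xy: (n, 2) projected coordinates in km; facility_risk: (n,) scores.
    """
    risk = np.asarray(facility_risk, float)
    d = np.linalg.norm(np.asarray(facility_xy, float) - np.asarray(centroid_xy, float), axis=1)
    inside = (d > 0) & (d <= radius_km)   # exclude co-located and distant facilities
    return float(np.sum(risk[inside] / d[inside]))
```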
NASA Astrophysics Data System (ADS)
Ellis, Andria P.; DeMets, Charles; Briole, Pierre; Molina, Enrique; Flores, Omar; Rivera, Jeffrey; Lasserre, Cécile; Lyon-Caen, Hélène; Lord, Neal
2015-05-01
As the first large subduction thrust earthquake off the coast of western Guatemala in the past several decades, the 2012 November 7 Mw = 7.4 earthquake offers the first opportunity to study coseismic and postseismic behaviour along a segment of the Middle America trench where frictional coupling makes a transition from weak coupling off the coast of El Salvador to strong coupling in southern Mexico. We use measurements at 19 continuous GPS sites in Guatemala, El Salvador and Mexico to estimate the coseismic slip and postseismic deformation of the November 2012 Champerico (Guatemala) earthquake. An inversion of the coseismic offsets, which range up to ˜47 mm at the surface near the epicentre, indicates that up to ˜2 m of coseismic slip occurred on a ˜30 × 30 km rupture area between ˜10 and 30 km depth, which is near the global CMT centroid. The geodetic moment of 13 × 1019 N m and corresponding magnitude of 7.4 both agree well with independent seismological estimates. Transient postseismic deformation that was recorded at 11 GPS sites is attributable to a combination of fault afterslip and viscoelastic flow in the lower crust and/or mantle. Modelling of the viscoelastic deformation suggests that it constituted no more than ˜30 per cent of the short-term postseismic deformation. GPS observations that extend six months after the earthquake are well fit by a model in which most afterslip occurred at the same depth or directly downdip from the rupture zone and released energy equivalent to no more than ˜20 per cent of the coseismic moment. An independent seismological slip solution that features more highly concentrated coseismic slip than our own fits the GPS offsets well if its slip centroid is translated ˜50 km to the west to a position close to our slip centroid. The geodetic and seismologic slip solutions thus suggest bounds of 2-7 m for the peak slip along a region of the interface no larger than 30 × 30 km.
Generic Design Procedures for the Repair of Acoustically Damaged Panels
2008-12-01
[Nomenclature and figure residue: h1, h2, and h3 are the plate thicknesses of components 1, 2, and 3; h13 is the distance from the centroid of component 1 to the centroid of component 3. Figure 4 gives the geometry for constrained layer damping of a simply supported/clamped plate, with physical dimensions h1, h2, h3, h13, Lx, Ly, and 2a, and material properties E1, E3, and G2.]
Shaffer, Franklin D.
2013-03-12
The application relates to particle trajectory recognition from a Centroid Population comprised of Centroids having an (x, y, t) or (x, y, f) coordinate. The method is applicable to the visualization and measurement of particle flow fields at high particle concentration. In one embodiment, the centroids are generated from particle images recorded on camera frames. The application encompasses digital computer systems and distribution media implementing the disclosed method and is particularly applicable to recognizing trajectories of particles in particle flows of high particle concentration. The method accomplishes trajectory recognition by forming Candidate Trajectory Trees and repeated searches at varying Search Velocities, such that initial search areas are set to a minimum size in order to recognize only the slowest, least accelerating particles, which produce higher local concentrations. When a trajectory is recognized, the centroids in that trajectory are removed from consideration in future searches.
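As a simplified analogue of the trajectory recognition described above (greedy nearest-centroid linking with a small search radius, rather than the patented Candidate Trajectory Tree search):

```python
import numpy as np

def link_trajectories(centroids, max_step):
    """Greedy frame-to-frame linking of particle centroids.

    centroids: list of (n_i, 2) arrays of (x, y) positions, one per frame.
    max_step: search radius; keeping it small favors the slowest particles
    first, echoing the repeated searches at increasing Search Velocities.
    Note: the per-frame arrays are consumed (claimed centroids are removed).
    """
    tracks = []
    for t in range(len(centroids) - 1):
        cur, nxt = centroids[t], centroids[t + 1]
        for p in cur:
            if len(nxt) == 0:
                break
            dists = np.linalg.norm(nxt - p, axis=1)
            j = int(dists.argmin())
            if dists[j] <= max_step:
                tracks.append((t, tuple(p), tuple(nxt[j])))
                nxt = np.delete(nxt, j, axis=0)   # claimed centroid drops out
        centroids[t + 1] = nxt
    return tracks
```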
NASA Astrophysics Data System (ADS)
Dotani, T.
1989-11-01
Strong quasi-periodic oscillations (QPO) were detected with Ginga in type 2 bursts from the Rapid Burster. The QPO have centroid frequencies of approximately 5 and 2 Hz during bursts that lasted approximately 10 and 30 s, respectively. The QPO observations were analyzed and the following results were obtained: QPO centroid frequencies have some correlation with burst duration and peak count rate; however, the correlations are complicated, and the burst parameters do not uniquely determine the QPO centroid frequency. The appearance of the QPO is closely related to the so-called timescale-invariant profile of the bursts; the QPO are significant only in the even-numbered peaks of the profile and not in the odd-numbered peaks. In most cases the QPO centroid frequency decreases by up to approximately 25 percent during a burst. The energy spectra at the QPO peaks and valleys were investigated, and the QPO peaks were found to have significantly higher blackbody temperature than the QPO valleys.
Surface-Wave Relocation of Remote Continental Earthquakes
NASA Astrophysics Data System (ADS)
Kintner, J. A.; Ammon, C. J.; Cleveland, M.
2017-12-01
Accurate hypocenter locations are essential for seismic event analysis. Single-event location estimation methods provide relatively imprecise results in remote regions with few nearby seismic stations. Previous work has demonstrated that improved relative epicentroid precision in oceanic environments is obtainable using surface-wave cross correlation measurements. We use intermediate-period regional and teleseismic Rayleigh and Love waves to estimate relative epicentroid locations of moderately-sized seismic events in regions around Iran. Variations in faulting geometry, depth, and intermediate-period dispersion make surface-wave based event relocation challenging across this broad continental region. We compare and integrate surface-wave based relative locations with InSAR centroid location estimates. However, mapping an earthquake sequence mainshock to an InSAR fault deformation model centroid is not always a simple process, since the InSAR observations are sensitive to post-seismic deformation. We explore these ideas using earthquake sequences in western Iran. We also apply surface-wave relocation to smaller magnitude earthquakes (3.5 < M < 5.0). Inclusion of smaller-magnitude seismic events in a relocation effort requires a shift in bandwidth to shorter periods, which increases the sensitivity of relocations to surface-wave dispersion. Frequency-domain inter-event phase observations are used to understand the time-domain cross-correlation information, and to choose the appropriate band for applications using shorter periods. Over short inter-event distances, the changing group velocity does not strongly degrade the relative locations. For small-magnitude seismic events in continental regions, surface-wave relocation does not appear simple enough to allow broad routine application, but using this method to analyze individual earthquake sequences can provide valuable insight into earthquake and faulting processes.
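The core measurement behind surface-wave relocation is the inter-event delay from cross-correlating records of two events at a common station; a minimal integer-sample sketch (subsample refinement and dispersion handling omitted):

```python
import numpy as np

def cc_delay(wave_a, wave_b, dt):
    """Inter-event delay from the cross-correlation peak of two surface-wave
    records at a common station (integer-sample precision).

    dt: sample interval in seconds. Positive delay means wave_a arrives later.
    """
    corr = np.correlate(wave_a, wave_b, mode="full")
    lag = int(corr.argmax()) - (len(wave_b) - 1)
    return lag * dt
```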
Epidemiology from Tweets: Estimating Misuse of Prescription Opioids in the USA from Social Media.
Chary, Michael; Genes, Nicholas; Giraud-Carrier, Christophe; Hanson, Carl; Nelson, Lewis S; Manini, Alex F
2017-12-01
The misuse of prescription opioids (MUPO) is a leading public health concern. Social media are playing an expanded role in public health research, but there are few methods for estimating established epidemiological metrics from social media. The purpose of this study was to demonstrate that the geographic variation of social media posts mentioning prescription opioid misuse strongly correlates with government estimates of MUPO in the last month. We wrote software to acquire publicly available tweets from Twitter from 2012 to 2014 that contained at least one keyword related to prescription opioid use (n = 3,611,528). A medical toxicologist and emergency physician curated the list of keywords. We used the semantic distance (SemD) to automatically quantify the similarity of meaning between tweets and identify tweets that mentioned MUPO. We defined the SemD between two words as the shortest distance between the two corresponding word-centroids. Each word-centroid represented all recognized meanings of a word. We validated this automatic identification with manual curation. We used Twitter metadata to estimate the location of each tweet. We compared our estimated geographic distribution with the 2013-2015 National Surveys on Drug Usage and Health (NSDUH). Tweets that mentioned MUPO formed a distinct cluster far away from semantically unrelated tweets. The state-by-state correlation between Twitter and NSDUH was highly significant across all NSDUH survey years. The correlation was strongest between Twitter and NSDUH data from those aged 18-25 (r = 0.94, p < 0.01 for 2012; r = 0.94, p < 0.01 for 2013; r = 0.71, p = 0.02 for 2014). The correlation was driven by discussions of opioid use, even after controlling for geographic variation in Twitter usage. Mentions of MUPO on Twitter correlate strongly with state-by-state NSDUH estimates of MUPO. We have also demonstrated that a natural language processing can be used to analyze social media to provide insights for syndromic toxicosurveillance.
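The word-centroid idea can be illustrated with embedding centroids; note that the paper's SemD is a shortest distance between word-centroids, so the Euclidean form below is an assumption for illustration only.

```python
import numpy as np

def semantic_distance(vectors_a, vectors_b):
    """Distance between two word-centroids in an embedding space.

    vectors_a / vectors_b: arrays of embedding vectors for all recognized
    meanings of each word; the centroid represents the word as a whole.
    """
    ca = np.mean(np.asarray(vectors_a, float), axis=0)
    cb = np.mean(np.asarray(vectors_b, float), axis=0)
    return float(np.linalg.norm(ca - cb))
```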
The 2017 Mw = 8.2 Tehuantepec earthquake: a slab bending or slab pull rupture?
NASA Astrophysics Data System (ADS)
Duputel, Z.; Gombert, B.; Simons, M.; Fielding, E. J.; Rivera, L. A.; Bekaert, D. P.; Jiang, J.; Liang, C.; Moore, A. W.; Liu, Z.
2017-12-01
On September 8, 2017, a regionally destructive Mw 8.2 intra-slab earthquake struck Mexico in the Gulf of Tehuantepec. While large intermediate-depth intra-slab earthquakes are a major hazard, we have only limited knowledge of the strain budgets within subducting slabs. Several mechanisms have been proposed to explain intraplate earthquakes in subduction zones. Bending stresses might cause the occurrence of seismic events located at depths where the slab dip changes abruptly. However, an alternative explanation is needed if the ruptures are found to propagate through the entire lithosphere. Depending on the coupling of the subduction interface, intraplate earthquakes occurring updip or downdip of the locked zone could also be caused by the negative buoyancy of the sinking slab (i.e., slab pull). The increasing availability of near-fault data provides a unique opportunity to better constrain the seismogenic behavior of large intra-slab earthquakes. Teleseismic analyses of the 2017 Tehuantepec earthquake have led to contrasting statements about the depth extent of the rupture: while most long-period centroid moment tensor inversions yield fairly large centroid depths (>40 km), some finite-fault models suggest much shallower slip concentrated at depths less than 30 km. In this study, we analyze GPS, InSAR, tsunami, and seismological data to constrain the earthquake location, fault geometry, and slip distribution. We use a Bayesian approach devoid of significant spatial smoothing to characterize the range of allowable rupture depths. In addition, to cope with potential artifacts in centroid depth estimates due to unmodeled lateral heterogeneities, we also analyze long-period seismological data using a full 3D Earth model. Preliminary results suggest a fairly deep rupture consistent with a slab-pull process breaking a significant proportion of the lithosphere and potentially reflecting at least local detachment of the slab.
NASA Astrophysics Data System (ADS)
Gomez-Gonzalez, J. M.; Mellors, R.
2007-05-01
We investigate the kinematics of the rupture process for the September 27, 2003, Mw 7.3 Altai earthquake and its associated large aftershocks. This is the largest earthquake to strike the Altai mountains within the last 50 years, and it provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicates a predominantly strike-slip event (strike = 130, dip = 75, rake = 170). The scalar moment for the main shock ranges from 0.688 to 1.196E+20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130-270 km. The main shock was followed closely by two aftershocks (Mw 5.7, Mw 6.4) that occurred the same day; another aftershock (Mw 6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw 6.4) to assess geometric similarities in their respective rupture processes. This aftershock occurred spatially very close to the mainshock and possesses a similar fault plane solution (strike = 128, dip = 71, rake = 154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment, and source duration, based on point- and finite-source modeling. The point-source approximation results are the departure parameters for the finite-source exploration. We evaluate the different reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.
Adjoint tomography and centroid-moment tensor inversion of the Kanto region, Japan
NASA Astrophysics Data System (ADS)
Miyoshi, T.
2017-12-01
A three-dimensional seismic wave speed model of the Kanto region of Japan was developed using adjoint tomography based on large-scale computing. Starting from a model based on previous travel-time tomographic results, we inverted the waveforms obtained at broadband seismic stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. The synthetic displacements were calculated using the spectral element method (SEM; e.g., Komatitsch and Tromp 1999; Peter et al. 2011), in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. The proposed model reveals several anomalous areas with extremely low Vs values in comparison with those of the initial model. The synthetic waveforms obtained using the newly proposed model for the selected earthquakes fit the observed waveforms better than the initial model in different period ranges within 5-30 s. In the present study, all centroid times of the source solutions were determined using time shifts based on cross-correlation, to avoid heavy computation before the structural inversion. Additionally, the parameters of the centroid-moment solutions were fully determined using the SEM assuming the 3D structure (e.g., Liu et al. 2004). As a preliminary result, the new solutions were essentially the same as the initial solutions, which may indicate that the 3D structure has little effect on the source estimation. Acknowledgements: This study was supported by JSPS KAKENHI Grant Number 16K21699.
A Routing Protocol for Multisink Wireless Sensor Networks in Underground Coalmine Tunnels
Xia, Xu; Chen, Zhigang; Liu, Hui; Wang, Huihui; Zeng, Feng
2016-01-01
Traditional underground coalmine monitoring systems are mainly based on the use of wired transmission. However, when cables are damaged during an accident, it is difficult to obtain relevant data on environmental parameters and the emergency situation underground. To address this problem, the use of wireless sensor networks (WSNs) has been proposed. However, the shape of coalmine tunnels is not conducive to the deployment of WSNs as they are long and narrow. Therefore, issues with the network arise, such as extremely large energy consumption, very weak connectivity, long time delays, and a short lifetime. To solve these problems, in this study, a new routing protocol algorithm for multisink WSNs based on transmission power control is proposed. First, a transmission power control algorithm is used to negotiate the optimal communication radius and transmission power of each sink. Second, the non-uniform clustering idea is adopted to optimize the cluster head selection. Simulation results are subsequently compared to the Centroid of the Nodes in a Partition (CNP) strategy and show that the new algorithm delivers a good performance: power efficiency is increased by approximately 70%, connectivity is increased by approximately 15%, the cluster interference is diminished by approximately 50%, the network lifetime is increased by approximately 6%, and the delay is reduced with an increase in the number of sinks. PMID:27916917
Novel tip-tilt sensing strategies for the laser tomography adaptive optics system of the GMT
NASA Astrophysics Data System (ADS)
van Dam, Marcos A.; Bouchez, Antonin H.; Conan, Rodolphe
2016-07-01
We investigate the tip-tilt sensor for the laser tomography adaptive optics system of the Giant Magellan Telescope. For the GMTIFS instrument, we require high Strehl over a moderate region of the sky, together with high throughput and very high sky coverage. In this paper, we simulate the performance of a K-band tip-tilt sensor using an eAPD array. The paper presents a comparison of different centroiding techniques and servo controllers. In addition, we explore the possibility of using the wavefront sensors (WFSs) of the ground layer adaptive optics (GLAO) mode to supplement the tip-tilt sensor measurement. The imaging requirement is almost met using the correlation algorithm to estimate the displacement of the spot, along with a high-order controller tailored to the telescope wind shake. This requires a star bright enough to run at 500 Hz, so the sky coverage is limited. In the absence of wind, the star can be fainter and the requirement is met. The spectroscopy requirement is met even in the case of high wind. The results are even better if we use the GLAO WFSs as well as the tip-tilt sensors. Further work will explore the viability of inserting a DM in the OIWFS and the resulting tip-tilt performance.
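As a concrete illustration of the correlation approach to spot displacement estimation mentioned above, here is a minimal, hedged sketch (not the actual GMT simulation code); it assumes the live spot image and the stored reference have the same shape.

```python
# Sketch: spot displacement from cross correlation with a reference image.
import numpy as np
from scipy.signal import fftconvolve

def correlation_shift(img, ref):
    """Integer-pixel (dx, dy) displacement of img relative to ref."""
    cc = fftconvolve(img, ref[::-1, ::-1], mode="same")  # cross-correlation
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)   # correlation peak
    return ix - img.shape[1] // 2, iy - img.shape[0] // 2

# In practice a sub-pixel refinement (e.g. a parabolic fit around the peak)
# would follow; only the integer-pixel estimate is shown here.
```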
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
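The weighted-centroid step in the photon-localization scheme above can be sketched as follows; this is an assumed, simplified version (window radius and edge handling are placeholders, not the authors' parameters).

```python
# Sketch: sub-pixel photon localization by a windowed weighted centroid.
import numpy as np

def photon_position(frame, r=2):
    """Weighted-centroid (x, y) of the brightest event in a photon frame."""
    iy, ix = np.unravel_index(np.argmax(frame), frame.shape)   # coarse peak
    win = frame[iy - r:iy + r + 1, ix - r:ix + r + 1].astype(float)
    y, x = np.indices(win.shape)
    tot = win.sum()
    # note: events within r pixels of the frame edge would need special care
    return ix - r + (x * win).sum() / tot, iy - r + (y * win).sum() / tot
```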
Rack Insertion End Effector (RIEE) guidance
NASA Technical Reports Server (NTRS)
Malladi, Narasimha S.
1994-01-01
NASA-KSC has developed a mechanism to handle and insert Racks into the Space Station Logistic Modules. This mechanism consists of a Base with 3 motorized degrees of freedom, a 3-section motorized Boom that extends from 15 to 44 feet in length, and a Rack Insertion End Effector (RIEE) with 5 hand wheels for precise alignment. During the 1993 NASA-ASEE Summer Faculty Fellowship Program at KSC, I designed an Active Vision (Camera) Arrangement and developed an algorithm to determine (1) the displacements required by the Boom for its initial positioning and (2) the rotations required at the five hand wheels of the RIEE for the insertion of the Rack, using the centroids of the Camera Images of the Location Targets in the Logistic Module. Presently, during the summer of '94, I completed the preliminary design of an easily portable measuring instrument that uses encoders to obtain the 3-Dimensional Coordinates of Location Targets in the Logistics Module relative to the RIEE mechanism frame. The algorithm developed in '93 can use the output of this instrument as well. Simplification of the '93 work and suggestions for future work are discussed.
Intraoperative cyclorotation and pupil centroid shift during LASIK and PRK.
Narváez, Julio; Brucks, Matthew; Zimmerman, Grenith; Bekendam, Peter; Bacon, Gregory; Schmid, Kristin
2012-05-01
To determine the degree of cyclorotation and centroid shift in the x and y axes that occurs intraoperatively during LASIK and photorefractive keratectomy (PRK). Intraoperative cyclorotation and centroid shift were measured in 63 eyes from 34 patients with a mean age of 34 years (range: 20 to 56 years) undergoing either LASIK or PRK. Preoperatively, an iris image of each eye was obtained with the VISX WaveScan Wavefront System (Abbott Medical Optics Inc) with iris registration. A VISX Star S4 (Abbott Medical Optics Inc) laser was later used to measure cyclotorsion and pupil centroid shift at the beginning of the refractive procedure and after flap creation or epithelial removal. The mean change in intraoperative cyclorotation was 1.48±1.11° in LASIK eyes and 2.02±2.63° in PRK eyes. Cyclorotation direction changed by >2° in 21% of eyes after flap creation in LASIK and in 32% of eyes after epithelial removal in PRK. The mean intraoperative shifts in the x and y axes were 0.13±0.15 mm and 0.17±0.14 mm, respectively, in LASIK eyes, and 0.09±0.07 mm and 0.10±0.13 mm, respectively, in PRK eyes. Intraoperative centroid shifts >100 μm in either the x or y axis occurred in 71% of LASIK eyes and 55% of PRK eyes. Significant changes in cyclotorsion and centroid shift were noted prior to surgery as well as intraoperatively with both LASIK and PRK. It may be advantageous to engage iris registration immediately prior to ablation to provide a reference point representative of eye position at the initiation of laser delivery. Copyright 2012, SLACK Incorporated.
Bashar, Md Khayrul; Komatsu, Koji; Fujimori, Toshihiko; Kobayashi, Tetsuya J
2012-01-01
Accurate identification of cell nuclei and their tracking using three-dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which contrasts sharply with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation, for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) pre-processing, (ii) local enhancement, and (iii) centroid extraction. The first step includes two variations: the first (Variant-1) uses the whole 3D pre-processed image, whereas the second (Variant-2) reduces the pre-processed image to candidate regions or a candidate hybrid image for further processing. In the second step, multiscale cube filtering is employed to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 removes spurious centroids from the Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest-neighborhood principle is then applied to merge fragmented nuclei. Both qualitative and quantitative analyses on a set of 100 images of a 3D mouse embryo are performed. Investigations reveal a promising achievement of the presented technique in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2), when compared with an existing method (86.06% and 90.11%) originally developed for analyzing C. elegans images.
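A hedged sketch of the Stage-1 candidate-extraction idea follows. The cube filter size and the intensity screen below are illustrative assumptions, not the paper's actual characteristic-ratio test.

```python
# Sketch: candidate nuclei centroids as local maxima of a cube-filtered volume.
import numpy as np
from scipy import ndimage

def candidate_centroids(vol, size=5, rel_thresh=0.5):
    smooth = ndimage.uniform_filter(vol.astype(float), size=size)  # cube filter
    peaks = smooth == ndimage.maximum_filter(smooth, size=size)    # local maxima
    peaks &= smooth > rel_thresh * smooth.max()   # crude stand-in for the
    return np.argwhere(peaks)                     # ratio threshold; (z, y, x)
```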
Precise Relative Earthquake Magnitudes from Cross Correlation
Cleveland, K. Michael; Ammon, Charles J.
2015-04-21
We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
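The core of such a cross-correlation magnitude estimate can be sketched in a few lines; this is an assumed simplification for a single event pair, whereas the method described above jointly uses all intercorrelated pairs in a group.

```python
# Sketch: relative magnitude from the amplitude ratio of correlated waveforms.
import numpy as np

def relative_magnitude(wave_a, wave_b):
    """M_a - M_b estimated as log10 of the least-squares amplitude ratio."""
    scale = np.dot(wave_a, wave_b) / np.dot(wave_b, wave_b)  # best-fit a ~ s*b
    return np.log10(abs(scale))
```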
NASA Astrophysics Data System (ADS)
Chatterjee, Subhamoy; Hegde, Manjunath; Banerjee, Dipankar; Ravindra, B.
2017-11-01
The century-long (1914-2007) Hα (656.28 nm) spectroheliograms from the Kodaikanal Solar Observatory (KSO) have recently been digitized. Using these newly calibrated, processed images, we study the evolution of dark elongated on-disk structures called filaments, which are potential representatives of magnetic activities on the Sun. To our knowledge, this is the oldest uniform digitized data set with daily images available today in Hα. We generate Carrington maps for the entire time duration and examine the correlations with maps of the same Carrington rotation from the Ca II K KSO data. Filaments are segmented from the Carrington maps using a semi-automated technique and are studied individually to extract their centroids and tilts. We plot the time-latitude distribution of the filament centroids, producing a butterfly diagram which clearly shows the presence of poleward migration. We separate polar filaments for each cycle and estimate the delay between the peaks of the polar filament number cycle and the sunspot number cycle. We correlate this delay with the delay between polar reversal and sunspot number maxima. This provides new insight into the role of polar filaments in polar reversal.
NASA Astrophysics Data System (ADS)
Madsen, P. T.; Kerr, I.; Payne, R.
2004-10-01
Pods of the little-known pygmy killer whale (Feresa attenuata) in the northern Indian Ocean were recorded with a vertical hydrophone array connected to a digital recorder sampling at 320 kHz. Recorded clicks were directional, short (25 μs) transients with estimated source levels between 197 and 223 dB re. 1 μPa (pp). Spectra of clicks recorded close to or on the acoustic axis were bimodal, with peak frequencies between 45 and 117 kHz and centroid frequencies between 70 and 85 kHz. The clicks share characteristics of echolocation clicks from similar-sized, whistling delphinids and have properties suited for the detection and classification of prey targeted by this odontocete.
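The centroid frequency quoted above is the amplitude-weighted mean frequency of the click spectrum; a minimal sketch follows (the sampling rate matches the recordings described, everything else is assumed).

```python
# Sketch: centroid frequency of a recorded click.
import numpy as np

def centroid_frequency(click, fs=320_000):
    power = np.abs(np.fft.rfft(click)) ** 2          # click power spectrum
    freqs = np.fft.rfftfreq(len(click), d=1.0 / fs)  # frequency axis (Hz)
    return (freqs * power).sum() / power.sum()       # weighted mean frequency
```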
Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier
NASA Astrophysics Data System (ADS)
Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar
2015-02-01
In this paper, a new approach for personal identification using finger vein images is presented. Finger vein is an emerging type of biometrics that attracts the attention of researchers in the biometrics area. Compared to other biometric traits such as face, fingerprint and iris, the finger vein is more secure and hard to counterfeit since the features are inside the human body. So far, most researchers have focused on how to extract robust features from the captured vein images; little research has been conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. The fuzzy membership function is then utilized to assign the test image to the class most frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database, collected from 492 fingers, shows that the proposed FkNCN performs better than the k-nearest neighbor, k-nearest centroid neighbor and fuzzy-based k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify finger vein images effectively.
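A condensed sketch of the nearest centroid neighbor selection that FkNCN builds on is given below; the surrounding rule and fuzzy membership weighting of the actual classifier are richer than this greedy version, so treat it as an assumed approximation.

```python
# Sketch: greedy k-nearest centroid neighbor (kNCN) selection.
import numpy as np

def kncn_indices(X, x, k):
    """Pick k training rows whose running centroid stays closest to x."""
    chosen, remaining = [], list(range(len(X)))
    for _ in range(k):
        best, best_d = None, np.inf
        for i in remaining:
            d = np.linalg.norm(np.mean(X[chosen + [i]], axis=0) - x)
            if d < best_d:              # centroid-to-query distance if i added
                best, best_d = i, d
        chosen.append(best)
        remaining.remove(best)
    return chosen

# a plain (non-fuzzy) prediction would be a majority vote of y[kncn_indices(...)]
```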
ERIC Educational Resources Information Center
Ferrarello, Daniela; Mammana, Maria Flavia; Pennisi, Mario
2018-01-01
In this paper, we show some properties of centroids of geometric figures, such as triangles, quadrilaterals and tetrahedra. In particular, we will prove the properties by means of geometric transformations and by introducing extensions of triangles and quadrilaterals, i.e. by adding one, two or three new vertices to the figure. The study of these…
Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research
ERIC Educational Resources Information Center
Ramlo, Sue
2016-01-01
This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…
SU-F-T-258: Efficacy of Exit Fluence-Based Dose Calculation for Prostate Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, J; Gardner, J; Neal, B
Purpose: To investigate the efficacy of exit-fluence-based dose computation for prostate radiotherapy by determining if it estimates the true dose more accurately than the original planning dose. Methods: Virtual exit-fluence-based dose computation was performed for 19 patients, each with 9-12 repeat CT images. For each patient, a 78 Gy treatment plan was created utilizing 5 mm CTV-to-PTV and OAR-to-PRV margins. A Monte Carlo framework was used to compute dose and exit-fluence images for the planning image and for each repeat CT image based on boney-anatomy-aligned and prostate-centroid-aligned CTs. Identical source particles were used for the MC dose computations on the planning and repeat CTs to maximize correlation. The exit-fluence-based dose and image were computed by multiplying source particle weights by FC(x,y) = FP(x,y)/FT(x,y), where (x,y) are the source particle coordinates projected to the exit-fluence plane and we denote the dose/fluence from the plan by (DP, FP), from the repeat CT by (DT, FT), and from the exit-fluence computation by (DFC, FFC). DFC mimics exit-fluence backprojection through the planning image since FT = FFC. Dose estimates were intercompared to judge the efficacy of exit-fluence-based dose computation. Results: Boney- and prostate-centroid-aligned results are combined, as there is no statistical difference between them, yielding 420 dose comparisons per dose-volume metric. DFC is more accurate than DP for 46%, 33%, and 44% of cases in estimating CTV D98, D50, and D2, respectively. DFC improved rectum D50 and D2 estimates in 54% and 49% of cases, respectively, and bladder D50 and D2 in 47% and 49%, respectively. While, averaged over all patients and images, DFC and DP were within 3.1% of DT, they differed from DT by as much as 22% for GTV D98, 71% for bladder D50, 17% for bladder D2, and 19% for rectum D2. Conclusion: Exit-fluence-based dose computations infrequently improve CTV or OAR dose estimates and should be used with caution. Research supported in part by Varian Medical Systems.
Novel angle estimation for bistatic MIMO radar using an improved MUSIC
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Xiaofei; Chen, Han
2014-09-01
In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
NASA Astrophysics Data System (ADS)
Raghib, Michael; Levin, Simon; Kevrekidis, Ioannis
2010-05-01
Self-propelled particle models (SPPs) are a class of agent-based simulations that have been successfully used to explore questions related to various flavors of collective motion, including flocking, swarming, and milling. These models typically consist of particle configurations where each particle moves with constant speed but changes its orientation in response to local averages of the positions and orientations of its neighbors found within some interaction region. These local averages are based on 'social interactions', which include avoidance of collisions, attraction, and polarization, and are designed to generate configurations that move as a single object. Errors made by the individuals in the estimates of the state of the local configuration are modeled as a random rotation of the updated orientation resulting from the social rules. More recently, SPPs have been introduced in the context of collective decision-making, where the main innovation consists of dividing the population into naïve and 'informed' individuals. Whereas naïve individuals follow the classical collective motion rules, members of the informed sub-population update their orientations according to a weighted average of the social rules and a fixed 'preferred' direction shared by all the informed individuals. Collective decision-making is then understood in terms of the ability of the informed sub-population to steer the whole group along the preferred direction. Summary statistics of collective decision-making are defined in terms of the stochastic properties of the random walk followed by the centroid of the configuration as the particles move about, in particular the scaling behavior of the mean squared displacement (msd). For the region of parameters where the group remains coherent, we note that there are two characteristic time scales: first, there is an anomalous transient shared by both purely naïve and informed configurations, i.e., the scaling exponent lies between 1 and 2. The long-time behavior of the msd of the centroid walk scales linearly with time for naïve groups (diffusion), but shows a sharp transition to quadratic scaling (advection) for informed ones. These observations suggest that the mesoscopic variables of interest are the magnitude of the drift, the diffusion coefficient, and the time scales at which the anomalous and the asymptotic behavior respectively dominate transport, the latter being linked to the time scale at which the group reaches a decision. In order to estimate these summary statistics from the msd, we assumed that the configuration centroid follows an uncoupled Continuous Time Random Walk (CTRW) with smooth jump and waiting time pdfs. The mesoscopic transport equation for this type of random walk corresponds to an Advection-Diffusion Equation with Memory (ADEM). The introduction of the memory, and thus non-Markovian effects, is necessary in order to correctly account for the two time scales present. Although we were not able to calculate the memory directly from the individual-level rules, we show that it can be estimated from a single, relatively short simulation run using a Mittag-Leffler function as a template. With this function it is possible to predict accurately the behavior of the msd, as well as the full pdf for the position of the centroid.
The resulting ADEM is self-consistent in the sense that transport parameters estimated from the memory via a Kubo relationship coincide with those estimated from the moments of the jump size pdf of the associated CTRW for a large number of group sizes, proportions of informed individuals, and degrees of bias along the preferred direction. We also discuss the phase diagrams for the transport coefficients estimated from this method, where we notice velocity-precision trade-offs, where precision is a measure of the deviation of realized group orientations with respect to the informed direction. We also note that the time scale to collective decision is invariant with respect to group size, and depends only on the proportion of informed individuals and the strength of the coupling along the informed direction.
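The msd diagnostic used above can be reproduced with a short script; this is a generic sketch (not the authors' code) that recovers the scaling exponent of a centroid trajectory, here tested on a purely diffusive random walk where the exponent should be close to 1.

```python
# Sketch: mean squared displacement of a centroid track and its scaling exponent.
import numpy as np

def msd(traj, max_lag=200):
    """traj: (T, 2) centroid positions; returns lags and msd values."""
    lags = np.arange(1, min(len(traj) // 2, max_lag))
    vals = [np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1)) for l in lags]
    return lags, np.array(vals)

traj = np.cumsum(np.random.randn(10_000, 2), axis=0)  # diffusive test walk
lags, m = msd(traj)
alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]     # ~1 diffusion, ~2 advection
```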
NASA Astrophysics Data System (ADS)
Ramanjooloo, Yudish; Tholen, David J.; Fohring, Dora; Claytor, Zach; Hung, Denise
2017-10-01
The asteroid community is moving towards the implementation of a new astrometric reporting format. This new format will finally include complementary astrometric uncertainties in the reported observations. The availability of uncertainties will allow ephemeris predictions and orbit solutions to be constrained with greater reliability, thereby improving the efficiency of the community's follow-up and recovery efforts. Our current uncertainty model comprises the uncertainties in centroiding on the trailed stars and the asteroid, and the uncertainty due to the astrometric solution. The accuracy of our astrometric measurements relies on how well we can minimise the offset between the spatial and temporal centroids of the stars and the asteroid. This offset is currently unmodelled and can be caused by variations in cloud transparency, the seeing, and tracking inconsistencies. The magnitude zero point of the image, which is affected by fluctuating weather conditions and the catalog bias in the photometric magnitudes, can serve as an indicator of the presence and thickness of clouds. Comparison of the astrometric uncertainties with the orbit solution residuals made it apparent that a component of the error analysis remained unaccounted for, resulting from cloud coverage and thickness, telescope tracking inconsistencies, and variable seeing. This work attempts to quantify the tracking inconsistency component. We have acquired a rich dataset with the University of Hawaii 2.24 metre telescope (UH-88 inch) that is well positioned to construct an empirical estimate of the tracking inconsistency component. This work is funded by NASA grant NXX13AI64G.
An Adaptive MR-CT Registration Method for MRI-guided Prostate Cancer Radiotherapy
Zhong, Hualiang; Wen, Ning; Gordon, James; Elshaikh, Mohamed A; Movsas, Benjamin; Chetty, Indrin J.
2015-01-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ/cm3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for development of high-quality MRI-guided radiation therapy. PMID:25775937
NASA Astrophysics Data System (ADS)
Surti, S.; Karp, J. S.
2018-03-01
The advent of silicon photomultipliers (SiPMs) has introduced the possibility of increased detector performance in commercial whole-body PET scanners. The primary advantage of these photodetectors is the ability to couple a single SiPM channel directly to a single PET scintillator pixel, typically 4 mm wide (a one-to-one coupled detector design). We performed simulation studies to evaluate the impact of three different event positioning algorithms in such detectors: (i) weighted energy centroid positioning (Anger logic), (ii) identifying the crystal with the maximum energy deposition (1st max crystal), and (iii) identifying the crystal with the second highest energy deposition (2nd max crystal). Detector simulations performed with LSO crystals indicate reduced positioning errors when using the 2nd max crystal positioning algorithm. These studies are performed over a range of crystal cross-sections varying from 1 × 1 mm2 to 4 × 4 mm2, as well as crystal thicknesses of 1 cm to 3 cm. System simulations were performed for a whole-body PET scanner (85 cm ring diameter) with a long axial FOV (70 cm) and show an improvement in reconstructed spatial resolution for a point source when using the 2nd max crystal positioning algorithm. Finally, we observe a 30-40% gain in contrast recovery coefficient values for 1 and 0.5 cm diameter spheres when using the 2nd max crystal positioning algorithm compared with the 1st max crystal positioning algorithm. These results show that there is an advantage to implementing the 2nd max crystal positioning algorithm in a new generation of PET scanners using a one-to-one coupled detector design with lutetium-based crystals, including LSO, LYSO or scintillators with similar density and effective atomic number.
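The three positioning rules compared in the study reduce to a few lines once per-crystal energy depositions are available; the data layout below is an assumption for illustration, not the simulation framework used in the paper.

```python
# Sketch: Anger logic vs. 1st/2nd max crystal positioning for one event.
import numpy as np

def positions(energies, xs, ys):
    """energies, xs, ys: 1D arrays over the crystals hit in one event."""
    order = np.argsort(energies)[::-1]                # descending deposition
    anger = ((xs * energies).sum() / energies.sum(),
             (ys * energies).sum() / energies.sum())  # weighted energy centroid
    first_max = (xs[order[0]], ys[order[0]])          # brightest crystal
    second_max = (xs[order[1]], ys[order[1]])         # second-brightest crystal
    return anger, first_max, second_max
```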
A comparison of kinematic algorithms to estimate gait events during overground running.
Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher
2015-01-01
The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds, nor used to estimate ground contact times. Therefore, the purpose of this study was to both develop a new, custom-designed, event detection algorithm and compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected on twenty runners during overground running at 5.6 m/s. The five algorithms were then implemented, and the estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data. Copyright © 2014 Elsevier B.V. All rights reserved.
(E)-2-[(2,4,6-Trimethoxybenzylidene)amino]phenol.
Kaewmanee, Narissara; Chantrapromma, Suchada; Boonnak, Nawong; Quah, Ching Kheng; Fun, Hoong-Kun
2014-01-01
There are two independent molecules in the asymmetric unit of the title compound, C16H17NO4, with similar conformations but some differences in their bond angles. Each molecule adopts a trans configuration with respect to the methylidene C=N bond and is twisted, with a dihedral angle between the two substituted benzene rings of 80.52 (7)° in one molecule and 83.53 (7)° in the other. All methoxy groups are approximately coplanar with the attached benzene rings, with C(methyl)-O-C-C torsion angles ranging from -6.7 (2) to 5.07 (19)°. In the crystal, the independent molecules are linked together by O-H⋯N and O-H⋯O hydrogen bonds and a π-π interaction [centroid-centroid distance of 3.6030 (9) Å], forming a dimer. The dimers are further linked by weak C-H⋯O interactions and another π-π interaction [centroid-centroid distance of 3.9452 (9) Å] into layers lying parallel to the ab plane.
NASA Astrophysics Data System (ADS)
Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin
2018-05-01
A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated by the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to 10^-5 rad root mean square, and the precision of the RHT was increased by approximately 100 nm.
Orientation estimation algorithm applied to high-spin projectiles
NASA Astrophysics Data System (ADS)
Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.
2014-06-01
High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the control system of high-spin projectiles. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application-specific. This paper presents an orientation estimation algorithm specific to these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros, and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation, performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery
2016-10-01
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimate the orientation of objects lacking axial symmetry is proposed. The suggested algorithm is intended to estimate the orientation of a specific known 3D object based on its 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere; the sphere points are distributed according to the geosphere principle. The gathered training image set is used to calculate descriptors, which are used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
Multiscale registration algorithm for alignment of meshes
NASA Astrophysics Data System (ADS)
Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.
2004-03-01
Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. This algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair, a surrogate scan is created from the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find the surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, those two points are accepted as a pair of surrogate corresponding points. The rotation and translation between the surrogate scan pair are determined using a set of surrogate corresponding points. Using the same rotation and translation values, the original scan pairs are aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm is terminated; otherwise, the above process is continued with cubes of smaller and smaller sizes. However, at each finer resolution the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load of finding the corresponding points.
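The surrogate-scan construction can be sketched directly: bin the points into cubes and replace each occupied cube by the centroid of its points. The cube size and data layout here are assumptions; re-running with a smaller cube reproduces the multiresolution refinement described above.

```python
# Sketch: surrogate scan = one centroid per occupied cube.
import numpy as np

def surrogate_scan(points, cube):
    """points: (N, 3) scan coordinates; cube: edge length of the binning cubes."""
    keys = np.floor(points / cube).astype(int)            # cube index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    cents = np.column_stack([np.bincount(inv, weights=points[:, d]) / counts
                             for d in range(points.shape[1])])
    return cents   # refine by calling again with a smaller `cube`
```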
Optic disc segmentation for glaucoma screening system using fundus images.
Almazroa, Ahmed; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-01-01
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head pathologies such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of optic nerve head abnormalities. The main contribution of this paper is a novel OD segmentation algorithm based on applying a level set method to a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique was applied. Another important contribution is the incorporation of the variation in opinion among ophthalmologists in detecting the disc boundaries and diagnosing glaucoma. Most previous studies were trained and tested based on only one opinion, which can be assumed to be biased toward that ophthalmologist. In addition, the accuracy was calculated based on the number of images that coincided with the ophthalmologists' agreed-upon images, and not only on the overlapping images as in previous studies. The ultimate goal of this project is to develop an automated image processing system for glaucoma screening. The disc algorithm is evaluated using a new retinal fundus image dataset called RIGA (retinal images for glaucoma analysis). In the case of low-quality images, a double level set was applied, in which the first level set was considered a localization for the OD. Five hundred and fifty images were used to test the algorithm's accuracy as well as the agreement among the manual markings of six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid was 83.9%, and the best agreement between the results of the algorithm and the manual markings was observed in 379 images.
High dimensional land cover inference using remotely sensed modis data
NASA Astrophysics Data System (ADS)
Glanz, Hunter S.
Image segmentation persists as a major statistical problem, with the volume and complexity of data expanding alongside new technologies. Land cover classification, one of the most studied problems in Remote Sensing, provides an important example of image segmentation whose needs transcend the choice of a particular classification method. That is, the challenges associated with land cover classification pervade the analysis process from data pre-processing to estimation of a final land cover map. Many of the same challenges also plague the task of land cover change detection. Multispectral, multitemporal data with inherent spatial relationships have hardly received adequate treatment due to the large size of the data and the presence of missing values. In this work we propose a novel, concerted application of methods which provide a unified way to estimate model parameters, impute missing data, reduce dimensionality, classify land cover, and detect land cover changes. This comprehensive analysis adopts a Bayesian approach which incorporates prior knowledge to improve the interpretability, efficiency, and versatility of land cover classification and change detection. We explore a parsimonious, parametric model that allows for a natural application of principal components analysis to isolate important spectral characteristics while preserving temporal information. Moreover, it allows us to impute missing data and estimate parameters via expectation-maximization (EM). A significant byproduct of our framework includes a suite of training data assessment tools. To classify land cover, we employ a spanning tree approximation to a lattice Potts prior to incorporate spatial relationships in a judicious way and more efficiently access the posterior distribution of pixel labels. We then achieve exact inference of the labels via the centroid estimator. To detect land cover changes, we develop a new EM algorithm based on the same parametric model. We perform simulation studies to validate our models and methods, and conduct an extensive continental scale case study using MODIS data. The results show that we successfully classify land cover and recover the spatial patterns present in large scale data. Application of our change point method to an area in the Amazon successfully identifies the progression of deforestation through portions of the region.
NASA Astrophysics Data System (ADS)
Letort, Jean; Guilbert, Jocelyn; Cotton, Fabrice; Bondár, István; Cano, Yoann; Vergoz, Julien
2015-06-01
The depth of an earthquake is difficult to estimate because of the trade-off between the depth and origin time estimates, and because it can be biased by lateral Earth heterogeneities. To face this challenge, we have developed a new, blind and fully automatic teleseismic depth analysis. The results of this new method do not depend on epistemic uncertainties due to depth-phase picking and identification. The method is a modification of the cepstral analysis of Letort et al. and Bonner et al., which aims to detect surface-reflected (pP, sP) waves in a signal at teleseismic distances (30°-90°) through the study of the spectral holes in the shape of the signal spectrum. The ability of our automatic method to improve depth estimates is shown by relocation of the recent moderate seismicity of the Guerrero subduction area (Mexico). We have estimated the depths of 152 events using teleseismic data from the IRIS stations and arrays. One advantage of this method is that it can be applied to single stations (from IRIS) as well as to classical arrays. In the Guerrero area, the new cepstral analysis efficiently clusters event locations and provides an improved view of the geometry of the subduction. Moreover, we have validated our method by relocating the same events using the new International Seismological Centre (ISC)-locator algorithm, and by comparing our cepstral depths with the available Harvard Centroid Moment Tensor (CMT) solutions and the three available ground truth (GT5) events (where the lateral location is assumed to be well constrained, with uncertainty <5 km) for this area. These comparisons indicate an overestimation of focal depths in the ISC catalogue for the deeper parts of the subduction, and they show a systematic bias between the estimated cepstral depths and the ISC-locator depths. Using information from the CMT catalogue on the predominant focal mechanism in this area, this bias can be explained as a misidentification of sP phases as pP phases, which underscores the value of this new automatic cepstral analysis, as it is less sensitive to phase identification.
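The cepstral idea is compact enough to sketch: an echo such as pP arriving tau seconds after P modulates the spectrum, and that modulation appears as a peak at quefrency tau in the cepstrum. This is a hedged illustration; the search window below is an assumption, not the paper's configuration.

```python
# Sketch: depth-phase delay from the real cepstrum of a teleseismic P wavelet.
import numpy as np

def cepstral_delay(sig, dt, tmin=1.0, tmax=30.0):
    spec = np.abs(np.fft.rfft(sig))                    # amplitude spectrum
    ceps = np.abs(np.fft.irfft(np.log(spec + 1e-12)))  # real cepstrum
    q = np.arange(len(ceps)) * dt                      # quefrency axis (s)
    mask = (q >= tmin) & (q <= tmax)                   # plausible pP-P delays
    return q[mask][np.argmax(ceps[mask])]
```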
Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS
NASA Astrophysics Data System (ADS)
Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.
2012-04-01
Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments, especially in the modeling of peri-urban catchments. Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types, geology, and flow networks, allow the representation of these elements in an explicit way, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topology-oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, geometrical parameters, such as the distance between the centroid of an HRU and the river network, and the length of the perimeter, can impact the realism of the calculated overland, sub-surface and groundwater fluxes. Therefore, it is necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing procedure implemented in the open-source GRASS-GIS software, built from several Python scripts and existing algorithms such as the Triangle software. First, scripts were developed to improve the topology of the various elements, such as snapping the river network to the closest contours. When layers are derived from remote sensing, such as vegetation areas, their perimeters contain many right angles, which were smoothed. Second, the algorithms more particularly address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours, and/or centroids lying outside the polygons. To identify these elements we used shape descriptors; the convexity index was considered the best descriptor, with a threshold of 0.75. Segmentation procedures were implemented and applied with criteria of homogeneous slope, convexity of the elements, and maximum area of the HRUs. These tasks were implemented using a triangulation approach, applying the Triangle software, in order to dissolve the polygons according to the convexity index criteria. The automatic pre-processing was applied to two peri-urban French catchments, the Mercier and Chaudanne catchments, of 7.3 km2 and 4.1 km2, respectively. We show that the optimized mesh allows a substantial improvement of the overland flow pathways, because the segmentation procedure gives a more realistic representation of the drainage network. KEYWORDS: GRASS-GIS, Hydrological Response Units, Automatic processing, Peri-urban catchments, Geometrical Algorithms
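The convexity screening is simple to state in code: the index is the polygon area divided by the area of its convex hull, and HRUs falling under the 0.75 threshold are sent to segmentation. This sketch assumes simple (non-multipart) polygons given as vertex arrays, not the actual GRASS-GIS scripts.

```python
# Sketch: convexity index used to flag badly shaped HRUs.
import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(xy):
    x, y = xy[:, 0], xy[:, 1]                          # shoelace formula
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def needs_segmentation(xy, threshold=0.75):
    hull_area = ConvexHull(xy).volume                  # .volume is area in 2D
    return polygon_area(xy) / hull_area < threshold
```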
Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H
2007-11-01
Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provides normative data for both American and Canadian children aged 6 to 16 years. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation groups). The development group was used to generate regression algorithms; one algorithm included only demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ correlated significantly with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within +/-10 points of actual FSIQ. The demographic-only algorithm was less accurate than the algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6 to 16 years. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish these algorithms as a premorbid estimation procedure.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
Manifold absolute pressure estimation using neural network with hybrid training algorithm
Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli
2017-01-01
In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm, and Particle Swarm Optimization (PSO) algorithm. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing MAP estimates close to the actual values. PMID:29190779
Raknes, Guttorm; Hunskaar, Steinar
2014-01-01
We describe a method that uses crowdsourced postcode coordinates and Google Maps to estimate the average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance, and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances, our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode-based distances, consistent with previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
Hallisey, Elaine; Tai, Eric; Berens, Andrew; Wilt, Grete; Peipins, Lucy; Lewis, Brian; Graham, Shannon; Flanagan, Barry; Lunsford, Natasha Buchanan
2017-08-07
Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates. The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland-Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates. This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
(2-{[2-(Diphenylphosphino)phenyl]thio}phenyl)diphenylphosphine sulfide.
Alvarez-Larena, Angel; Martinez-Cuevas, Francisco J; Flor, Teresa; Real, Juli
2012-11-01
In the title compound, C36H28P2S2, the dihedral angle between the central benzene rings is 66.95 (13)°. In the crystal, molecules are linked via C(ar)-H⋯π and π-π interactions [shortest centroid-centroid distance between benzene rings = 3.897 (2) Å].
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, e.g., aerodynamic coefficients, representing uncertain parameters, can easily be incorporated into the estimation algorithm, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer significantly improves.
Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Tian, Xin; Pan, Le-chun
2014-07-01
Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the mixing efficiency, but the performance of traditional centroid-tracking tilt correction is poor under strong turbulence. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, peak-value tracking performs distinctly better than the traditional centroid-tracking method; it also avoids the phenomenon, possible with centroid tracking, in which a large antenna performs worse than a small one.
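The difference between the two tracking references is easy to demonstrate numerically. A minimal sketch on a synthetic, scintillation-broken spot; the image and spot positions are illustrative, not from the paper's simulations.

```python
# Sketch: tilt estimation from a focal-plane spot by (a) intensity
# centroid and (b) peak-value tracking. Under strong turbulence the
# brightest speckle can be a better pointing reference than the centroid
# of the broken-up spot. The image below is synthetic.
import numpy as np

rng = np.random.default_rng(1)
y, x = np.mgrid[0:64, 0:64]
# A distorted spot: main lobe at (40, 22) plus a weaker secondary speckle.
img = (np.exp(-((x - 22)**2 + (y - 40)**2) / 18.0)
       + 0.4 * np.exp(-((x - 50)**2 + (y - 12)**2) / 30.0)
       + 0.01 * rng.random((64, 64)))

# (a) centroid tracking: intensity-weighted mean position.
total = img.sum()
cy, cx = (img * y).sum() / total, (img * x).sum() / total

# (b) peak tracking: location of the maximum pixel.
py, px = np.unravel_index(np.argmax(img), img.shape)

print(f"centroid estimate: ({cy:.1f}, {cx:.1f})")  # pulled off the main lobe
print(f"peak estimate:     ({py}, {px})")          # stays on the main lobe
```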
Monte Carlo Volcano Seismic Moment Tensors
NASA Astrophysics Data System (ADS)
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
Bimorph deformable mirror: an appropriate wavefront corrector for retinal imaging?
NASA Astrophysics Data System (ADS)
Laut, Sophie; Jones, Steve; Park, Hyunkyu; Horsley, David A.; Olivier, Scot; Werner, John S.
2005-11-01
The purpose of this study was to evaluate the performance of a bimorph deformable mirror from AOptix, inserted into an adaptive optics system designed for in-vivo retinal imaging at high resolution. We wanted to determine its suitability as a wavefront corrector for vision science and ophthalmological instrumentation. We present results obtained in a closed-loop system and compare them with previous open-loop performance measurements. Our goal was to obtain precise wavefront reconstruction with rapid convergence of the control algorithm. The quality of the reconstruction is expressed in terms of the root-mean-squared (RMS) wavefront residual error and the number of frames required to perform compensation. Our instrument used a Hartmann-Shack sensor for the wavefront measurements. We also determined the precision and ability of the deformable mirror to compensate the most common types of aberrations present in the human eye (defocus, cylinder, astigmatism, and coma), and the quality of its correction, in terms of the maximum amplitude of the corrected wavefront. In addition to wavefront correction, we also used the closed-loop system to generate an arbitrary aberration pattern by entering the desired Hartmann-Shack centroid locations as input to the AO controller. These centroid locations were computed in Matlab for a user-defined aberration pattern, allowing us to test the ability of the DM to generate and compensate for various aberrations. We conclude that this device, in combination with another DM based on Micro-Electro-Mechanical Systems (MEMS) technology, may provide better compensation of the higher-order ocular wavefront aberrations of the human eye.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fei Baowei; Wang Hesheng; Muzic, Raymond F. Jr.
2006-03-15
We are investigating imaging techniques to study the tumor response to photodynamic therapy (PDT). Positron emission tomography (PET) can provide physiological and functional information. High-resolution magnetic resonance imaging (MRI) can provide anatomical and morphological changes. Image registration can combine MRI and PET images for improved tumor monitoring. In this study, we acquired high-resolution MRI and microPET 18F-fluorodeoxyglucose (FDG) images from C3H mice with RIF-1 tumors that were treated with Pc 4-based PDT. We developed two registration methods for this application. For registration of the whole mouse body, we used an automatic three-dimensional, normalized mutual information algorithm. For tumor registration, we developed a finite element model (FEM)-based deformable registration scheme. To assess the quality of whole body registration, we performed slice-by-slice review of both image volumes; manually segmented feature organs, such as the left and right kidneys and the bladder, in each slice; and computed the distance between corresponding centroids. Over 40 volume registration experiments were performed with MRI and microPET images. The distance between corresponding centroids of organs was 1.5 ± 0.4 mm, which is about 2 pixels of the microPET images. The mean volume overlap ratios for tumors were 94.7% and 86.3% for the deformable and rigid registration methods, respectively. Registration of high-resolution MRI and microPET images combines anatomical and functional information of the tumors and provides a useful tool for evaluating photodynamic therapy.
Image-based fall detection and classification of a user with a walking support system
NASA Astrophysics Data System (ADS)
Taghvaei, Sajjad; Kosuge, Kazuhiro
2017-10-01
The classification of visual human action is important in the development of systems that interact with humans. This study investigates an image-based classification of the human state while using a walking support system to improve the safety and dependability of these systems. We categorize the possible human behavior while utilizing a walker robot into eight states (i.e., sitting, standing, walking, and five falling types), and propose two different methods, namely, normal distribution and hidden Markov models (HMMs), to detect and recognize these states. The visual feature for the state classification is the centroid position of the upper body, which is extracted from the user's depth images. The first method shows that the centroid position follows a normal distribution while walking, which can be adopted to detect any non-walking state. The second method implements HMMs to detect and recognize these states. We then measure and compare the performance of both methods. The classification results are employed to control the motion of a passive-type walker (called "RT Walker") by activating its brakes in non-walking states. Thus, the system can be used for sit/stand support and fall prevention. The experiments are performed with four subjects, including an experienced physiotherapist. Results show that the algorithm can be adapted to the new user's motion pattern within 40 s, with a fall detection rate of 96.25% and a state classification rate of 81.0%. The proposed method can be applied to other abnormality detection/classification applications that employ depth image-sensing devices.
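A minimal sketch of the first method: fit a normal distribution to the upper-body centroid during walking and flag improbable centroids as non-walking. The centroid coordinates and threshold below are illustrative assumptions, not the paper's calibration.

```python
# Sketch: detect non-walking states by thresholding the Mahalanobis
# distance of the upper-body centroid under a walking-state normal model.
# Training data are synthetic (x, z) centroid positions in metres.
import numpy as np

rng = np.random.default_rng(2)
walking = rng.normal([0.0, 1.1], [0.05, 0.03], size=(500, 2))

mu = walking.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(walking.T))

def is_non_walking(centroid, threshold=3.0):
    """True if the centroid is improbable under the walking model."""
    d = centroid - mu
    return np.sqrt(d @ cov_inv @ d) > threshold

print(is_non_walking(np.array([0.02, 1.08])))  # typical walking -> False
print(is_non_walking(np.array([0.30, 0.60])))  # falling/sitting -> True
```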
Amplitude Control of Solid-State Modulators for Precision Fast Kicker Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, J A; Anaya, R M; Caporaso, G C
2002-11-15
A solid-state modulator with very fast rise and fall times, pulse width agility, and multi-pulse burst and intra-pulse amplitude adjustment capability for use with high speed electron beam kickers has been designed and tested at LLNL. The modulator uses multiple solid-state modules stacked in an inductive-adder configuration. Amplitude adjustment is provided by controlling individual modules in the adder, and is used to compensate for transverse e-beam motion as well as the dynamic response and beam-induced steering effects associated with the kicker structure. A control algorithm calculates a voltage based on measured e-beam displacement and adjusts the modulator to regulate beam centroid position. This paper presents design details of amplitude control along with measured performance data from kicker operation on the ETA-II accelerator at LLNL.
Designing broad phononic band gaps for in-plane modes
NASA Astrophysics Data System (ADS)
Li, Yang Fan; Meng, Fei; Li, Shuo; Jia, Baohua; Zhou, Shiwei; Huang, Xiaodong
2018-03-01
Phononic crystals are known as artificial materials that can manipulate the propagation of elastic waves, and one essential feature of phononic crystals is the existence of a forbidden frequency range for traveling waves, called band gaps. In this paper, we have proposed an easy way to design phononic crystals with large in-plane band gaps. We demonstrated that the gap between two arbitrarily appointed bands of the in-plane mode can be formed by employing a certain number of solid or hollow circular rods embedded in a matrix material. Topology optimization has been applied to find the best material distributions within the primitive unit cell with maximal band gap width. Our results reveal that the centroids of the optimized rods coincide with the point positions generated by Lloyd's algorithm, which deepens our understanding of the formation mechanism of phononic in-plane band gaps.
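For reference, a minimal sketch of Lloyd's algorithm, the point-generation scheme named above: seeds repeatedly move to the centroids of their Voronoi cells, here approximated by assigning a dense grid of sample points to the nearest seed. The unit-square domain and seed count are illustrative.

```python
# Sketch of Lloyd's algorithm in the unit square: each seed moves to the
# centroid of its (discretely approximated) Voronoi cell until the
# configuration stops changing.
import numpy as np

rng = np.random.default_rng(3)
seeds = rng.random((12, 2))                      # initial "rod centroids"
gy, gx = np.mgrid[0:200, 0:200]
grid = np.column_stack([gx.ravel(), gy.ravel()]) / 200.0

for _ in range(50):
    # Assign every grid point to its nearest seed (discrete Voronoi cells).
    d2 = ((grid[:, None, :] - seeds[None, :, :])**2).sum(axis=2)
    owner = d2.argmin(axis=1)
    # Move each seed to the centroid of its cell.
    new = np.array([grid[owner == k].mean(axis=0) for k in range(len(seeds))])
    if np.allclose(new, seeds, atol=1e-6):
        break
    seeds = new

print(seeds)  # a near-uniform, centroidal-Voronoi arrangement
```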
Aviation safety research and transportation/hazard avoidance and elimination
NASA Technical Reports Server (NTRS)
Sonnenschein, C. M.; Dimarzio, C.; Clippinger, D.; Toomey, D.
1976-01-01
Data collected by the Scanning Laser Doppler Velocimeter System (SLDVS) were analyzed to determine the feasibility of the SLDVS for monitoring aircraft wake vortices in an airport environment. Data were collected on atmospheric vortices and analyzed. Over 1600 landings were monitored at Kennedy International Airport, and by the end of the test period 95 percent of the runs with large aircraft were producing usable results in real time. The transport was determined in real time and in post-analysis using algorithms that computed centroids over the highest-amplitude regions of the thresholded spectrum. Making use of other parameters of the spectrum, vortex flow fields were studied along with the time histories of peak velocities and amplitudes. The post-analysis of the data was accomplished with a CDC-6700 computer using several programs developed for LDV data analysis.
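A minimal sketch of that spectral-centroid step on a synthetic Doppler spectrum; the velocity axis, threshold fraction, and spectrum shape are illustrative assumptions.

```python
# Sketch: velocity estimate as the centroid of the thresholded Doppler
# spectrum, in the spirit of the real-time transport algorithm above.
import numpy as np

velocities = np.linspace(-15.0, 15.0, 256)               # velocity bins, m/s
spectrum = np.exp(-(velocities - 4.0)**2 / 2.0)          # vortex return
spectrum += 0.05 * np.random.default_rng(4).random(256)  # noise floor

threshold = 0.2 * spectrum.max()
mask = spectrum > threshold                              # suppress noise bins
v_centroid = (velocities[mask] * spectrum[mask]).sum() / spectrum[mask].sum()
print(f"Doppler velocity centroid: {v_centroid:.2f} m/s")  # ~4 m/s
```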
Cherry recognition in natural environment based on the vision of picking robot
NASA Astrophysics Data System (ADS)
Zhang, Qirong; Chen, Shanxiong; Yu, Tingzhong; Wang, Yan
2017-04-01
In order to realize automatic recognition of cherries in the natural environment, this paper designed a recognition method for a picking-robot vision system. The first step of this method is to pre-process the cherry image with median filtering. The second step is to identify the colour of the cherry through the 0.9R-G colour difference formula, and then use the Otsu algorithm for threshold segmentation. The third step is to remove noise by using an area threshold. The fourth step is to remove holes in the cherry image by morphological closing and opening operations. The fifth step is to obtain the centroid and contour of the cherry by using the minimum bounding rectangle and the Hough transform. Through this recognition process, 96% of cherries without occlusion or adhesion can be successfully identified.
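A minimal OpenCV sketch of that pipeline under stated assumptions: the input file name, blob-area threshold, and kernel size are illustrative, and the centroid is taken from image moments rather than the paper's minimum-bounding-rectangle/Hough step.

```python
# Sketch: median filter -> 0.9R-G -> Otsu -> area filter -> close/open ->
# centroid from image moments.
import cv2
import numpy as np

bgr = cv2.imread("cherry.jpg")                     # hypothetical input image
bgr = cv2.medianBlur(bgr, 5)                       # step 1: median filter

b, g, r = cv2.split(bgr)
diff = cv2.subtract(np.uint8(0.9 * r), g)          # step 2: 0.9R - G
_, binary = cv2.threshold(diff, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Step 3: area threshold removes small noise blobs.
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
clean = np.zeros_like(binary)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] > 200:
        clean[labels == i] = 255

# Step 4: closing then opening fills holes and smooths edges.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_OPEN, kernel)

# Step 5: centroid of each remaining region from image moments.
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        print("cherry centroid:", m["m10"] / m["m00"], m["m01"] / m["m00"])
```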
Design of optical axis jitter control system for multi beam lasers based on FPGA
NASA Astrophysics Data System (ADS)
Ou, Long; Li, Guohui; Xie, Chuanlin; Zhou, Zhiqiang
2018-02-01
A design of an FPGA-based closed-loop optical axis control system for coherent combining of multiple laser beams is introduced. The system uses piezoelectric-ceramic fast steering mirrors (FSMs) as actuators and a high-speed CMOS camera that detects the far-field spots of the beams for optical sensing, with the FPGA-based controller providing real-time optical axis jitter suppression. The optical axis centroid detection algorithm and a PID controller with anti-integral-saturation were implemented in the FPGA. The logic circuit was optimized through resource reuse and pipelining, which reduced both the logic resources and the delay time and raised the closed-loop bandwidth to 100 Hz. Laser jitter below 40 Hz was attenuated by 40 dB. The system is low in cost and operates stably.
NASA Astrophysics Data System (ADS)
Qian, Jinfang; Zhang, Changjiang
2014-11-01
An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of glass bottle mouths, is proposed. Firstly, under illumination from a ball integral light source, an image of a perfect glass bottle mouth is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain the binary image of the glass bottle mouth. In order to efficiently suppress noise, a moving average filter is employed to smooth the histogram of the original glass bottle mouth image, and then a continuous wavelet transform is applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to obtain the normal binary bottle-mouth mask. A glass bottle to be inspected is moved to the detection zone by a conveyor belt, and both its mouth image and binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to obtain a region of interest. Four parameters (number of connected regions, coordinates of the centroid position, diameter of the inner circle, and area of the annular region) are computed from the region of interest. Detection rules based on these four parameters are designed so as to accurately detect and identify the defect conditions of the glass bottle. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles with 98% detection accuracy.
Travel Times, Streamflow Velocities, and Dispersion Rates in the Yellowstone River, Montana
McCarthy, Peter M.
2009-01-01
The Yellowstone River is a vital natural resource to the residents of southeastern Montana and is a primary source of water for irrigation and recreation and the primary source of municipal water for several cities. The Yellowstone River valley is the primary east-west transportation corridor through southern Montana. This complex of infrastructure makes the Yellowstone River especially vulnerable to accidental spills from various sources such as tanker cars and trucks. In 2008, the U.S. Geological Survey (USGS), in cooperation with the Montana Department of Environmental Quality, initiated a dye-tracer study to determine instream travel times, streamflow velocities, and dispersion rates for the Yellowstone River from Lockwood to Glendive, Montana. The purpose of this report is to describe the results of this study and summarize data collected at each of the measurement sites between Lockwood and Glendive. This report also compares the results of this study to estimated travel times from a transport model developed by the USGS for a previous study. For this study, Rhodamine WT dye was injected at four locations in late September and early October 2008 during reasonably steady streamflow conditions. Streamflows ranged from 3,490 to 3,770 cubic feet per second upstream from the confluence of the Bighorn River and ranged from 6,520 to 7,570 cubic feet per second downstream from the confluence of the Bighorn River. Mean velocities were calculated for each subreach between measurement sites for the leading edge, peak concentration, centroid, and trailing edge at 10 percent of the peak concentration. Calculated velocities for the centroid of the dye plume for subreaches that were completely laterally mixed ranged from 1.83 to 3.18 ft/s within the study reach from Lockwood Bridge to Glendive Bridge. The mean of the completely mixed centroid velocity for the entire study reach, excluding the subreach between Forsyth Bridge and Cartersville Dam, was 2.80 ft/s. Longitudinal dispersion rates of the dye plume for this study ranged from 0.06 ft/s for the subreach upstream from Forsyth Bridge to 2.25 ft/s for the subreach upstream from Calypso Bridge for subreaches where the dye was completely laterally mixed. A relation was determined between travel time of the peak concentration and time for the dye plume to pass a site (duration). This relation can be used to estimate when the receding concentration of a potential contaminant reaches 10 percent of its peak concentration for accidental spills into the Yellowstone River. Data from this dye-tracer study were used to evaluate velocity and concentration estimates from a transport model developed as part of an earlier USGS study. Comparison of the estimated and calculated velocities for the study reach indicates that the transport model estimates the velocities of the Yellowstone River between Huntley Bridge and Glendive Bridge with reasonable accuracy. Velocities of the peak concentration of the dye plume calculated for this study averaged 10 percent faster than the most probable velocities and averaged 12 percent slower than the maximum probable velocities estimated from the transport model. Peak Rhodamine WT dye concentrations were consistently lower than the transport model estimates except for the most upstream subreach of each dye injection. The most upstream subreach of each dye injection is expected to have a higher concentration because of incomplete lateral mixing. Lower measured peak concentrations for all other sites were expected because Rhodamine WT dye deteriorates when exposed to sunlight and will sorb onto the streambanks and stream bottom. Velocity-streamflow relations developed by using routine streamflow measurements at USGS gaging stations and the transport model can be used to estimate mean streamflow velocities throughout a range of streamflows. The variation in these velocity-streamflow relations emphasizes the uncertainty in estimating the mean streamflow velocities.
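The centroid-velocity arithmetic behind those subreach values is simple; a small worked example with illustrative numbers (not the report's measured values):

```python
# Sketch: mean subreach velocity from dye-plume centroid passage times.
# All numbers below are illustrative, not measured values from the study.
reach_length_mi = 18.5                 # distance between two bridge sites
t_centroid_upstream_hr = 4.2           # centroid arrival time at site 1
t_centroid_downstream_hr = 14.1        # centroid arrival time at site 2

travel_time_s = (t_centroid_downstream_hr - t_centroid_upstream_hr) * 3600.0
length_ft = reach_length_mi * 5280.0
print(f"centroid velocity: {length_ft / travel_time_s:.2f} ft/s")  # 2.74
```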
`Skinny Milky Way please', says Sagittarius
NASA Astrophysics Data System (ADS)
Gibbons, S. L. J.; Belokurov, V.; Evans, N. W.
2014-12-01
Motivated by recent observations of the Sagittarius stream, we devise a rapid algorithm to generate faithful representations of the centroids of stellar tidal streams formed in a disruption of a progenitor of an arbitrary mass in an arbitrary potential. Our method works by releasing swarms of test particles at the Lagrange points around the satellite and subsequently evolving them in a combined potential of the host and the progenitor. We stress that the action of the progenitor's gravity is crucial to making streams that look almost indistinguishable from the N-body realizations, as indeed ours do. The method is tested on mock stream data in three different Milky Way potentials with increasing complexity, and is shown to deliver unbiased inference on the Galactic mass distribution out to large radii. When applied to the observations of the Sagittarius stream, our model gives a natural explanation of the stream's apocentric distances and the differential orbital precession. We, therefore, provide a new independent measurement of the Galactic mass distribution beyond 50 kpc. The Sagittarius stream model favours a light Milky Way with the mass 4.1 ± 0.4 × 1011 M⊙ at 100 kpc, which can be extrapolated to 5.6 ± 1.2 × 1011 M⊙ at 200 kpc. Such a low mass for the Milky Way Galaxy is in good agreement with estimates from the kinematics of halo stars and from the satellite galaxies (once Leo I is removed from the sample). It entirely removes the `Too Big To Fail Problem'.
NASA Astrophysics Data System (ADS)
Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.
2017-12-01
We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r onto an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane and generates the peak ground motion. The appropriate ground motion prediction equations (GMPE) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid fixed at the geometric mean is discussed; in this model the hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
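A minimal sketch of the geometric kernel mentioned above, estimating the area of a circle inside a rectangle by Monte Carlo sampling; the sampling scheme and numbers are illustrative, not the authors' algorithm.

```python
# Sketch: Monte Carlo estimate of the area of a circle of radius r
# (centred at cx, cy) lying within the rectangle [0, W] x [0, H].
import numpy as np

def circle_area_in_rectangle(cx, cy, r, W, H, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Uniform points in the disk via rejection from its bounding square.
    pts = rng.uniform(-r, r, size=(n, 2))
    inside_disk = (pts**2).sum(axis=1) <= r * r
    x, y = pts[inside_disk, 0] + cx, pts[inside_disk, 1] + cy
    in_rect = (0 <= x) & (x <= W) & (0 <= y) & (y <= H)
    # Disk area times the fraction of disk points inside the rectangle.
    return np.pi * r * r * in_rect.mean()

# Circle centred on a rectangle edge: about half its area lies inside.
print(circle_area_in_rectangle(0.0, 5.0, 2.0, 10.0, 10.0))  # ~ pi * 4 / 2
```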
Single-camera three-dimensional tracking of natural particulate and zooplankton
NASA Astrophysics Data System (ADS)
Troutman, Valerie A.; Dabiri, John O.
2018-07-01
We develop and characterize an image processing algorithm to adapt single-camera defocusing digital particle image velocimetry (DDPIV) for three-dimensional (3D) particle tracking velocimetry (PTV) of natural particulates, such as those present in the ocean. The conventional DDPIV technique is extended to facilitate tracking of non-uniform, non-spherical particles within a volume depth an order of magnitude larger than current single-camera applications (i.e. 10 cm × 10 cm × 24 cm depth) by a dynamic template matching method. This 2D cross-correlation method does not rely on precise determination of the centroid of the tracked objects. To accommodate the broad range of particle number densities found in natural marine environments, the performance of the measurement technique at higher particle densities has been improved by utilizing the time-history of tracked objects to inform 3D reconstruction. The developed processing algorithms were analyzed using synthetically generated images of flow induced by Hill’s spherical vortex, and the capabilities of the measurement technique were demonstrated empirically through volumetric reconstructions of the 3D trajectories of particles and highly non-spherical, 5 mm zooplankton.
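A minimal sketch of the template-matching idea, locating a non-spherical particle by the peak of a normalized cross-correlation map rather than a centroid fit; the frame, template shape, and parameters are synthetic stand-ins for the paper's dynamic templates.

```python
# Sketch: locate an irregular particle by 2D normalized cross-correlation.
import numpy as np
import cv2

rng = np.random.default_rng(5)
frame = (20 * rng.random((128, 128))).astype(np.float32)
template = np.zeros((15, 15), np.float32)
cv2.ellipse(template, (7, 7), (6, 3), 30, 0, 360, 255, -1)  # odd shape
frame[60:75, 40:55] += template                  # embed the "particle"

score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best, _, loc = cv2.minMaxLoc(score)           # loc = (x, y) of corner
print("match score:", round(best, 3), "top-left:", loc)  # near (40, 60)
```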
Efficient architecture for spike sorting in reconfigurable hardware.
Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying
2013-11-01
This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and the fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA) and embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design attaining a high classification correct rate and high-speed computation.
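For reference, a minimal software sketch of the GHA update (Sanger's rule), the feature-extraction step named above; this is not the paper's hardware implementation, and the data, learning rate, and component count are illustrative.

```python
# Sketch of the generalized Hebbian algorithm: the weight rows converge
# toward the leading principal components of the input stream.
import numpy as np

rng = np.random.default_rng(6)
# Synthetic "spike snippets": 64-sample waveforms with decaying variance
# per coordinate, so the principal components are well defined.
data = rng.normal(size=(5000, 64)) @ np.diag(np.linspace(2.0, 0.1, 64))

n_components, lr = 3, 1e-3
W = rng.normal(scale=0.1, size=(n_components, 64))

for _ in range(10):                    # a few passes over the stream
    for x in data:
        y = W @ x
        # GHA update: Hebbian term minus Gram-Schmidt-like decorrelation.
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Rows of W now approximate the top principal components (near unit norm).
print(np.linalg.norm(W, axis=1))
```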
Galván-Tejada, Carlos E.; Zanella-Calzada, Laura A.; Galván-Tejada, Jorge I.; Celaya-Padilla, José M.; Gamboa-Rosales, Hamurabi; Garza-Veloz, Idalia; Martinez-Fierro, Margarita L.
2017-01-01
Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, using mammography for early detection has been demonstrated to be a very important tool increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using a computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using: Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost function in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions. PMID:28216571
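A minimal sketch of one cost function named above, nearest-centroid classification, scoring a candidate feature subset the way a genetic algorithm would score an individual; the synthetic data, mask, and `fitness` helper are illustrative, not taken from the BCDR databases.

```python
# Sketch: nearest-centroid accuracy as a GA fitness for feature selection.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))                   # 10 image features
y = (X[:, 2] + X[:, 5] > 0).astype(int)          # benign=0 / malignant=1

def fitness(mask, X, y):
    """Accuracy of a nearest-centroid rule on the masked features."""
    Xs = X[:, mask]
    centroids = np.array([Xs[y == c].mean(axis=0) for c in (0, 1)])
    d = ((Xs[:, None, :] - centroids[None, :, :])**2).sum(axis=2)
    return (d.argmin(axis=1) == y).mean()

mask = np.zeros(10, bool)
mask[[2, 5]] = True                              # one candidate subset
print("fitness of features {2, 5}:", fitness(mask, X, y))
```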
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, score function estimation from samples of the observation signals (a combination of speech and music) is needed. The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results of the presented algorithm on speech-music separation, compared with a separating algorithm based on the minimum mean square error estimator, indicate that it achieves better performance and less processing time.
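A minimal sketch of score function estimation with Gaussian kernels: for a kernel density estimate built from Gaussian bumps, the score ψ(x) = -p'(x)/p(x) has the closed form used below. The bandwidth and test data are illustrative, and this is only the estimation step, not the full separation algorithm.

```python
# Sketch: estimate the score function psi(x) = -p'(x)/p(x) from samples
# via a Gaussian kernel density estimate (bandwidth h is an assumption).
import numpy as np

def score_function(x_eval, samples, h=0.3):
    diff = x_eval[:, None] - samples[None, :]        # (m, n) differences
    k = np.exp(-0.5 * (diff / h)**2)                 # unnormalized kernels
    # psi = sum_i k_i * (x - x_i) / h^2  /  sum_i k_i
    return (k * diff).sum(axis=1) / (h**2 * k.sum(axis=1))

rng = np.random.default_rng(8)
samples = rng.normal(0.0, 1.0, 2000)                 # observation samples
x = np.linspace(-2, 2, 5)
# For a unit Gaussian the true score is psi(x) = x.
print(score_function(x, samples))  # ~ x, up to smoothing bias
```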
Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.
2014-01-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are liable to incur estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
NASA Astrophysics Data System (ADS)
Pollyea, R.; Mohammadi, N.; Taylor, J. E.
2017-12-01
The annual earthquake rate in Oklahoma increased dramatically between 2009 and 2016, owing in large part to the rapid proliferation of salt water disposal (SWD) wells associated with unconventional oil and gas recovery. This study presents a geospatial analysis of earthquake occurrence and SWD injection volume within a 68,420 km2 area in north-central Oklahoma between 2011 and 2016. The spatial co-variability of earthquake occurrence and SWD injection volume is analyzed for each year of the study by calculating the geographic centroid for both earthquake epicenters and volume-weighted well locations. In addition, the spatial cross correlation between earthquake occurrence and SWD volume is quantified by calculating the cross semivariogram annually for a 9.6 km × 9.6 km (6 mi × 6 mi) grid over the study area. Results from these analyses suggest that the relationship between volume-weighted well centroids and earthquake centroids generally follows pressure diffusion space-time scaling, and the volume-weighted well centroid predicts the geographic earthquake centroid within a 1σ radius of gyration. The cross semivariogram calculations show that SWD injection volume and earthquake occurrence are spatially cross correlated between 2014 and 2016. These results also show that the strength of cross correlation decreased from 2015 to 2016; however, the cross correlation length scale remains unchanged at 125 km. This suggests that earthquake mitigation efforts have been moderately successful in decreasing the strength of cross correlation between SWD volume and earthquake occurrence in the near field, but the far-field contribution of SWD injection volume to earthquake occurrence remains unaffected.
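The two centroids compared above are simple weighted and unweighted means; a minimal sketch with illustrative projected coordinates and volumes (not the study's data):

```python
# Sketch: annual geographic centroid of earthquake epicenters versus the
# volume-weighted centroid of disposal wells. Coordinates are
# illustrative projected values in km; volumes are illustrative m^3.
import numpy as np

eq_xy = np.array([[410.2, 3995.1], [422.7, 4001.3], [418.0, 3988.6]])
well_xy = np.array([[405.0, 3990.0], [430.0, 4005.0], [415.0, 3985.0]])
volume = np.array([2.1e5, 7.4e5, 3.3e5])        # annual injected volume

eq_centroid = eq_xy.mean(axis=0)
well_centroid = (well_xy * volume[:, None]).sum(axis=0) / volume.sum()

print("earthquake centroid:     ", eq_centroid)
print("volume-weighted centroid:", well_centroid)
print("separation (km):", np.linalg.norm(eq_centroid - well_centroid))
```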
NASA Astrophysics Data System (ADS)
Yuan, Ying; Li, Jicun; Li, Xin-Zheng; Wang, Feng
2018-05-01
The development of effective centroid potentials (ECPs) is explored with both the constrained-centroid and quasi-adiabatic force matching using liquid water as a test system. A trajectory integrated with the ECP is free of statistical noises that would be introduced when the centroid potential is approximated on the fly with a finite number of beads. With the reduced cost of ECP, challenging experimental properties can be studied in the spirit of centroid molecular dynamics. The experimental number density of H2O is 0.38% higher than that of D2O. With the ECP, the H2O number density is predicted to be 0.42% higher, when the dispersion term is not refit. After correction of finite size effects, the diffusion constant of H2O is found to be 21% higher than that of D2O, which is in good agreement with the 29.9% higher diffusivity for H2O observed experimentally. Although the ECP is also able to capture the redshifts of both the OH and OD stretching modes in liquid water, there are a number of properties that a classical simulation with the ECP will not be able to recover. For example, the heat capacities of H2O and D2O are predicted to be almost identical and higher than the experimental values. Such a failure is simply a result of not properly treating quantized vibrational energy levels when the trajectory is propagated with classical mechanics. Several limitations of the ECP based approach without bead population reconstruction are discussed.
Yañez-Arenas, Carlos; Peterson, A. Townsend; Mokondoko, Pierre; Rojas-Soto, Octavio; Martínez-Meyer, Enrique
2014-01-01
Background Many authors have claimed that snakebite risk is associated with human population density, human activities, and snake behavior. Here we analyzed whether the environmental suitability of vipers can be used as an indicator of snakebite risk. We tested several hypotheses to explain snakebite incidence, through the construction of models incorporating both environmental suitability and socioeconomic variables in Veracruz, Mexico. Methodology/Principal Findings Ecological niche modeling (ENM) was used to estimate potential geographic and ecological distributions of nine viper species in Veracruz. We calculated the distance to the species' niche centroid (DNC); this distance may be associated with a prediction of abundance. We found significant inverse relationships between snakebites and DNCs of common vipers (Crotalus simus and Bothrops asper), explaining respectively 15% and almost 35% of variation in snakebite incidence. Additionally, DNCs for these two vipers, in combination with marginalization of human populations, accounted for 76% of variation in incidence. Conclusions/Significance Our results suggest that niche modeling and niche-centroid distance approaches can be used to map distributions of environmental suitability for venomous snakes; combining this ecological information with socioeconomic factors may help with inferring potential risk areas for snakebites, since hospital data are often biased (especially when incidences are low). PMID:24963989
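A minimal sketch of the distance-to-niche-centroid (DNC) idea in standardized environmental space; the environmental variables, values, and standardization choice are illustrative assumptions, not the study's ENM procedure.

```python
# Sketch: distance to the niche centroid as a suitability/abundance proxy;
# smaller DNC = closer to the species' environmental optimum.
import numpy as np

# Environmental conditions at presence records (rows) for one species:
# e.g. mean temperature (C), precipitation (mm), elevation (m).
presences = np.array([[24.1, 1100.0, 350.0],
                      [22.8, 1310.0, 420.0],
                      [25.0,  980.0, 300.0],
                      [23.5, 1250.0, 510.0]])

mu, sigma = presences.mean(axis=0), presences.std(axis=0)
centroid = ((presences - mu) / sigma).mean(axis=0)   # ~0 by construction

def dnc(env):
    """Distance from a site's environment to the niche centroid."""
    return np.linalg.norm((env - mu) / sigma - centroid)

print(dnc(np.array([23.9, 1150.0, 400.0])))   # near centroid -> small DNC
print(dnc(np.array([15.0, 2500.0, 2000.0])))  # marginal site -> large DNC
```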
Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia
Wiley, Jeffrey B.
2008-01-01
Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. Regional equations were developed using ordinary least-squares regression using statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location. Examples of some estimating procedures are presented.
How frequently will the Surface Water and Ocean Topography (SWOT) observe floods?
NASA Astrophysics Data System (ADS)
Frasson, R. P. M.; Schumann, G.
2017-12-01
The SWOT mission will measure river width and water surface elevations of rivers wider than 100 m. As the data gathered by this mission will be freely available, it can be of great use for flood modeling, especially in areas where streamgage networks are exceedingly sparse, or when data sharing barriers prevent timely access to information. Although SWOT has worldwide coverage, its temporal sampling is limited, with most locations being revisited once or twice every 21 days. Our objective is to evaluate what fraction of worldwide floods SWOT will observe and how many observations per event the satellite will likely obtain. We take advantage of the extensive database of floods constructed by the Dartmouth Flood Observatory (DFO), which since 1985 has searched through news sources, governmental agencies, and more recently remote sensing imagery for flood information, including flood duration, location, and affected area. We cross-referenced the flood locations in the DFO archive with the SWOT prototype prior database of river centerlines and the anticipated satellite orbit to identify how many SWOT swaths were located within 10 km, 20 km, and 50 km of a flood centroid. Subsequently, we estimated the probability that SWOT would have at least one observation of a flood event per distance bin by multiplying the number of swaths in the distance bin by the flood duration divided by the SWOT orbit repeat period. Our analysis considered 132 worldwide floods recorded between May 2016 and May 2017. Of these, 29, 52, and 86 floods had at least a 50% probability of having one overpass within 10 km, 20 km, and 50 km, respectively. Moreover, after excluding flood events with no river centerlines within 10 km of their centroids, the average number of swaths within 10 km of a flood centroid was 1.79, indicating that in the 37 flood events that were likely caused by river flooding, at least one measurement was guaranteed to happen during the event.
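The probability estimate described above is a one-line calculation; a minimal sketch with illustrative flood durations and swath counts:

```python
# Sketch: P(>=1 SWOT observation) ~ (number of nearby swaths) x
# (flood duration) / (orbit repeat period), capped at 1.
SWOT_REPEAT_DAYS = 21.0

def overpass_probability(n_swaths, flood_duration_days):
    """Approximate probability of at least one overpass during a flood."""
    return min(1.0, n_swaths * flood_duration_days / SWOT_REPEAT_DAYS)

# A 7-day flood with 2 swaths within 10 km of its centroid:
print(overpass_probability(2, 7.0))   # ~0.67
# A 3-day flash flood covered by a single swath:
print(overpass_probability(1, 3.0))   # ~0.14
```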
Huang, Qiongyu; Sauer, John R; Dubayah, Ralph O
2017-09-01
Shifts in species distributions are a major fingerprint of climate change. Examining changes in species abundance structures at a continental scale enables robust evaluation of climate change influences, but few studies have conducted these evaluations due to limited data and methodological constraints. In this study, we estimate temporal changes in abundance from North American Breeding Bird Survey data at the scale of physiographic strata to examine the relative influence of different components of climatic factors and evaluate the hypothesis that shifting species distributions are multidirectional in resident bird species in North America. We quantify the direction and velocity of the abundance shifts of 57 permanent resident birds over 44 years using a centroid analysis. For species with significant abundance shifts in the centroid analysis, we conduct a more intensive correlative analysis to identify the climate components most strongly associated with composite change of abundance within strata. Our analysis focuses on two contrasts: the relative importance of climate extremes vs. averages, and of temperature vs. precipitation, in strength of association with abundance change. Our study shows that 36 species had significant abundance shifts over the study period. The average velocity of the centroid is 5.89 km·yr⁻¹. The shifted distance on average covers 259 km, 9% of range extent. Our results strongly suggest that the climate change fingerprint in the studied avian distributions is multidirectional. Among the 6 directions with significant abundance shifts, the northwestward shift was observed in the largest number of species (n = 13). The temperature/average climate model consistently has greater predictive ability than the precipitation/extreme climate model in explaining strata-level abundance change. Our study shows heterogeneous avian responses to recent environmental changes. It highlights the need for more species-specific approaches to examine factors contributing to recent distributional changes and for comprehensive conservation planning for climate change adaptation. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Abercrombie, R.E.; Webb, T.H.; Robinson, R.; McGinty, P.J.; Mori, J.J.; Beavan, R.J.
2000-01-01
The 1994 Arthur's Pass earthquake (Mw 6.7) is the largest in a recent sequence of earthquakes in the central South Island, New Zealand. No surface rupture was observed, the aftershock distribution was complex, and routine methods of obtaining the faulting orientation of this earthquake proved contradictory. We use a range of data and techniques to obtain our preferred solution, which has a centroid depth of 5 km, M0 = 1.3 × 10¹⁹ N m, and a strike, dip, and rake of 221°, 47°, and 112°, respectively. Discrepancies between this solution and the Harvard centroid moment tensor, together with the Global Positioning System (GPS) observations and unusual aftershock distribution, suggest that the rupture may not have occurred on a planar fault. A second, strike-slip, subevent on a more northerly striking plane is suggested by these data, but neither the body wave modeling nor regional broadband recordings show any complexity or late subevents. We relocate the aftershocks using both one-dimensional and three-dimensional velocity inversions. The depth range of the aftershocks (1-10 km) agrees well with the preferred mainshock centroid depth. The aftershocks near the hypocenter suggest a structure dipping toward the NW, which we interpret to be the mainshock fault plane. This structure and the Harper fault, ~15 km to the south, appear to have acted as boundaries to the extensive aftershock zone trending NNW-SSE. Most of the ML ≥ 5 aftershocks, including the two largest (ML 6.1 and ML 5.7), clustered near the Harper fault and have strike-slip mechanisms consistent with motion on this fault and its conjugates. Forward modeling of the GPS data suggests that a reverse-slip mainshock, combined with strike-slip aftershock faulting in the south, is able to match the observed displacements. The occurrence of this earthquake sequence implies that the level of seismic hazard in the central South Island is greater than previous estimates. Copyright 2000 by the American Geophysical Union.
Temporal variations in the position of the heliospheric equator
NASA Astrophysics Data System (ADS)
Obridko, V. N.; Shelting, B. D.
2008-08-01
It is shown that the centroid of the heliospheric equator undergoes quasi-periodic oscillations. During the minimum of the 11-year cycle, the centroid shifts southwards (the so-called bashful-ballerina effect). The direction of the shift reverses during the solar maximum. The solar quadrupole is responsible for this effect. The shift is compared with the tilt of the heliospheric current sheet.
Event Centroiding Applied to Energy-Resolved Neutron Imaging at LANSCE
Borges, Nicholas; Losko, Adrian; Vogel, Sven
2018-02-13
The energy dependence of the neutron cross section provides vastly different contrast mechanisms than polychromatic neutron radiography if neutron energies can be selected for imaging applications. In recent years, energy-resolved neutron imaging (ERNI) with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Flight Path 5 beam line at LANSCE and continues to be refined. In this work, we present event centroiding, i.e., the determination of the center of gravity of a detection event on an imaging detector to allow sub-pixel spatial resolution, and apply it to the many frames collected for energy-resolved neutron imaging at a pulsed neutron source. While event centroiding was demonstrated at thermal neutron sources, it has not been applied to energy-resolved neutron imaging, where the energy resolution needs to be preserved, and we present a quantification of the achievable resolution as a function of neutron energy. For the 55 μm pixel size of the detector used for this study, we found a resolution improvement from ~80 μm to ~22 μm using pixel centroiding while fully preserving the energy resolution.
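A minimal sketch of the center-of-gravity step on a single synthetic detector frame: detection-event clusters are labeled and their intensity-weighted centroids recovered at sub-pixel precision. The frame, threshold, and event position below are illustrative.

```python
# Sketch: sub-pixel event centroiding on a detector frame. Each event
# lights up a small pixel cluster; its intensity-weighted centre of
# gravity is finer than the pixel pitch.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(9)
frame = rng.poisson(0.2, size=(64, 64)).astype(float)   # dark counts
yy, xx = np.mgrid[0:64, 0:64]
# One event: a charge cloud centred between pixels, at (20.4, 33.7).
frame += 50 * np.exp(-((xx - 33.7)**2 + (yy - 20.4)**2) / 2.0)

labels, n = ndimage.label(frame > 5.0)                  # event clusters
centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
print(centroids)   # ~ [(20.4, 33.7)], in fractional pixel units
```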
Star centroiding error compensation for intensified star sensors.
Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun
2016-12-26
A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of the image intensifier decreases the star centroiding accuracy, in turn influencing the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on orthographic projection through an analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained by using the Levenberg-Marquardt (LM) optimization method. Finally, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error of the image plane. Laboratory calibration results and night-sky experiment results show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of intensified star sensors.
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. With a Kalman filter, the abundance estimate for a pixel can be obtained in a single-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for that pixel with the robust filter, and again using the genetic algorithm to refine the estimate efficiently from the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
3-Phenyl-6-(2-pyridyl)-1,2,4,5-tetrazine.
Chartrand, Daniel; Laverdière, François; Hanan, Garry
2007-12-06
The title compound, C13H9N5, is the first asymmetric diaryl-1,2,4,5-tetrazine to be crystallographically characterized. We have been interested in this motif for incorporation into supramolecular assemblies based on coordination chemistry. The solid-state structure shows a centrosymmetric molecule, forcing a positional disorder of the terminal phenyl and pyridyl rings. The molecule is completely planar, unusual for aromatic rings with N atoms in adjacent ortho positions. The stacking observed is very common in diaryltetrazines and is dominated by π stacking [centroid-to-centroid distance between the tetrazine ring and the aromatic ring of an adjacent molecule is 3.6 Å, with a perpendicular (centroid-to-plane) distance of about 3.3 Å].
Algorithms for Brownian first-passage-time estimation
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
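A minimal sketch of the general setting, a discrete-space, continuous-time first-passage simulation for a linear potential; the detailed-balance rate choice below is a generic assumption and is not necessarily the paper's construction (which makes the MFPT exact for linear potentials at any lattice spacing).

```python
# Sketch: MFPT from x = 0 to x = L on a lattice of spacing a, for the
# linear potential U(x) = -F*x (beta = 1), with exponential waiting times
# and hopping rates satisfying detailed balance. Reflection at x = 0 is
# approximated by clipping.
import numpy as np

def mfpt(F=1.0, D=1.0, a=0.1, L=1.0, n_walkers=5000, seed=0):
    rng = np.random.default_rng(seed)
    k0 = D / a**2
    k_plus = k0 * np.exp(F * a / 2.0)     # downhill hop
    k_minus = k0 * np.exp(-F * a / 2.0)   # uphill hop
    p_plus = k_plus / (k_plus + k_minus)
    times = np.zeros(n_walkers)
    x = np.zeros(n_walkers)
    active = np.ones(n_walkers, bool)
    while active.any():
        m = active.sum()
        times[active] += rng.exponential(1.0 / (k_plus + k_minus), m)
        step = np.where(rng.random(m) < p_plus, a, -a)
        x[active] = np.clip(x[active] + step, 0.0, None)
        active[active] = x[active] < L - 1e-12
    return times.mean()

print(mfpt())  # compare against the exact drift-diffusion MFPT
```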
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for the coarse frequency estimation (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
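A minimal sketch in the same coarse/fine spirit: a zero-crossing count gives a cheap coarse frequency that points to the FFT peak bin, and parabolic interpolation of the log-magnitude refines it. The paper's modified zero-crossing technique and fine step are not reproduced exactly; signal parameters are illustrative.

```python
# Sketch: coarse frequency from zero crossings, fine frequency from
# parabolic interpolation of the windowed FFT magnitude near that bin.
import numpy as np

fs, n = 10_000.0, 8192
t = np.arange(n) / fs
f_true = 313.0
# Fundamental plus a (weaker) third harmonic.
x = np.sin(2 * np.pi * f_true * t) + 0.2 * np.sin(2 * np.pi * 3 * f_true * t)

# Coarse step: zero-crossing count of the fundamental-dominated signal.
crossings = np.count_nonzero(np.diff(np.signbit(x)))
f_coarse = crossings / 2.0 / (n / fs)

# Fine step: parabolic interpolation of log|FFT| around the coarse bin.
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(round(f_coarse * n / fs))
a, b, c = np.log(spec[k - 1 : k + 2])
delta = 0.5 * (a - c) / (a - 2 * b + c)        # sub-bin offset
f_fine = (k + delta) * fs / n

print(f"coarse: {f_coarse:.2f} Hz, fine: {f_fine:.3f} Hz")  # fine ~ 313.0
```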
NASA Astrophysics Data System (ADS)
Ren, Silin; Jin, Xiao; Chan, Chung; Jian, Yiqiang; Mulnix, Tim; Liu, Chi; Carson, Richard E.
2017-06-01
Data-driven respiratory gating techniques were developed to correct for respiratory motion in PET studies, without the help of external motion tracking systems. Due to the greatly increased image noise in gated reconstructions, it is desirable to develop a data-driven event-by-event respiratory motion correction method. In this study, using the Centroid-of-distribution (COD) algorithm, we established a data-driven event-by-event respiratory motion correction technique using TOF PET list-mode data, and investigated its performance by comparing with an external system-based correction method. Ten human scans with the pancreatic β-cell tracer 18F-FP-(+)-DTBZ were employed. Data-driven respiratory motions in superior-inferior (SI) and anterior-posterior (AP) directions were first determined by computing the centroid of all radioactive events during each short time frame with further processing. The Anzai belt system was employed to record respiratory motion in all studies. COD traces in both SI and AP directions were first compared with Anzai traces by computing the Pearson correlation coefficients. Then, respiratory gated reconstructions based on either COD or Anzai traces were performed to evaluate their relative performance in capturing respiratory motion. Finally, based on correlations of displacements of organ locations in all directions and COD information, continuous 3D internal organ motion in SI and AP directions was calculated based on COD traces to guide event-by-event respiratory motion correction in the MOLAR reconstruction framework. Continuous respiratory correction results based on COD were compared with that based on Anzai, and without motion correction. Data-driven COD traces showed a good correlation with Anzai in both SI and AP directions for the majority of studies, with correlation coefficients ranging from 63% to 89%. Based on the determined respiratory displacements of pancreas between end-expiration and end-inspiration from gated reconstructions, there was no significant difference between COD-based and Anzai-based methods. Finally, data-driven COD-based event-by-event respiratory motion correction yielded comparable results to that based on Anzai respiratory traces, in terms of contrast recovery and reduced motion-induced blur. Data-driven event-by-event respiratory motion correction using COD showed significant image quality improvement compared with reconstructions with no motion correction, and gave comparable results to the Anzai-based method.
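The core COD computation is simple to sketch; assuming arrays of event times and axial (SI) event coordinates from the list-mode stream, with a 0.5 s frame length chosen only for illustration:

```python
import numpy as np

def cod_trace(t, z, frame_s=0.5):
    """Centroid-of-distribution (COD) trace: the mean axial (SI) coordinate of
    all coincidence events in each short time frame. Event arrays and the
    frame length are illustrative assumptions."""
    edges = np.arange(t.min(), t.max() + frame_s, frame_s)
    idx = np.digitize(t, edges) - 1              # frame index for each event
    cod = np.full(len(edges) - 1, np.nan)        # NaN marks empty frames
    for k in range(len(cod)):
        sel = idx == k
        if sel.any():
            cod[k] = z[sel].mean()               # activity centroid this frame
    return cod
```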
An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.
Liu, Jing; Huang, Kaiyu; Zhang, Guoxian
2017-04-20
We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on deterministic sensing matrices built directly on real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of either set; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
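The innovation-support step lends itself to a compact sketch: project out the columns of the estimated common support, then run a generic OMP on the projected system. The matrix A, measurement y, and sparsity level k_innov are placeholders:

```python
import numpy as np

def innovation_support(A, y, common, k_innov):
    """Sketch of the DCSMP innovation step: remove the estimated common support
    by orthogonal projection, then search the innovation support with OMP."""
    Ac = A[:, common]
    P = np.eye(A.shape[0]) - Ac @ np.linalg.pinv(Ac)  # projector onto span(Ac)^perp
    yp, Ap = P @ y, P @ A
    support, r = [], yp.copy()
    for _ in range(k_innov):
        j = int(np.argmax(np.abs(Ap.T @ r)))          # most correlated column
        support.append(j)
        x, *_ = np.linalg.lstsq(Ap[:, support], yp, rcond=None)
        r = yp - Ap[:, support] @ x                   # update residual
    return sorted(set(support))
```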
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently proposed estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms consolidated in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
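The estimator's core is a nonlinear weighted least-squares (Gauss-Newton) iteration; a generic sketch, where h(x) is a user-supplied pressure model mapping the atmospheric state to predicted port pressures and W encodes measurement uncertainty (both are placeholders, not the flight code):

```python
import numpy as np

def jacobian(h, x, eps=1e-6):
    """Finite-difference Jacobian of the measurement model h at state x."""
    f0 = h(x)
    J = np.zeros((len(f0), len(x)))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (h(xp) - f0) / eps
    return J

def wls_estimate(h, y, W, x0, iters=10):
    """Gauss-Newton weighted least squares: iterate x <- x + (H'WH)^-1 H'W r,
    where r is the pressure residual. State x could hold winds and density."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        H = jacobian(h, x)
        r = y - h(x)
        x = x + np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
    return x
```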
Direction of Arrival Estimation Using a Reconfigurable Array
2005-05-06
Keywords: direction-of-arrival estimation; MUSIC algorithm; reconfigurable array; experimental.
Tuli, Richard; Surmak, Andrew; Reyes, Juvenal; Hacker-Prietz, Amy; Armour, Michael; Leubner, Ashley; Blackford, Amanda; Tryggestad, Erik; Jaffee, Elizabeth M; Wong, John; Deweese, Theodore L; Herman, Joseph M
2012-04-01
We report on a novel preclinical pancreatic cancer research model that uses bioluminescence imaging (BLI)-guided irradiation of orthotopic xenograft tumors, sparing of surrounding normal tissues, and quantitative, noninvasive longitudinal assessment of treatment response. Luciferase-expressing MiaPaCa-2 pancreatic carcinoma cells were orthotopically injected in nude mice. BLI was compared to pathologic tumor volume, and photon emission was assessed over time. BLI was correlated to positron emission tomography (PET)/computed tomography (CT) to estimate tumor dimensions. BLI and cone-beam CT (CBCT) were used to compare tumor centroid location and estimate setup error. BLI and CBCT fusion was performed to guide irradiation of tumors using the small animal radiation research platform (SARRP). DNA damage was assessed by γ-H2Ax staining. BLI was used to longitudinally monitor treatment response. Bioluminescence predicted tumor volume (R = 0.8984) and increased linearly as a function of time up to a 10-fold increase in tumor burden. BLI correlated with PET/CT and necropsy specimen in size (P < .05). Two-dimensional BLI centroid accuracy was 3.5 mm relative to CBCT. BLI-guided irradiated pancreatic tumors stained positively for γ-H2Ax, whereas surrounding normal tissues were spared. Longitudinal assessment of irradiated tumors with BLI revealed significant tumor growth delay of 20 days relative to controls. We have successfully applied the SARRP to a bioluminescent, orthotopic preclinical pancreas cancer model to noninvasively: 1) allow the identification of tumor burden before therapy, 2) facilitate image-guided focal radiation therapy, and 3) allow normalization of tumor burden and longitudinal assessment of treatment response.
NASA Astrophysics Data System (ADS)
Yao, Yunjun; Liang, Shunlin; Yu, Jian; Zhao, Shaohua; Lin, Yi; Jia, Kun; Zhang, Xiaotong; Cheng, Jie; Xie, Xianhong; Sun, Liang; Wang, Xuanyu; Zhang, Lilin
2017-04-01
Accurate estimates of terrestrial latent heat of evaporation (LE) for different biomes are essential to assess energy, water, and carbon cycles. Different satellite-based Priestley-Taylor (PT) algorithms have been developed to estimate LE in different biomes. However, there are still large uncertainties in LE estimates among different PT algorithms. In this study, we evaluated differences in estimated terrestrial water flux in different biomes from three satellite-based PT algorithms using ground-observed data from eight eddy covariance (EC) flux towers in China. The results reveal that large differences in daily LE estimates based on EC measurements exist among the three PT algorithms across the eight ecosystem types. At the forest (CBS) site, all algorithms demonstrate high performance with low root mean square error (RMSE) (less than 16 W/m2) and high squared correlation coefficient (R2) (more than 0.9). At the village (HHV) site, the ATI-PT algorithm has the lowest RMSE (13.9 W/m2), with a bias of 2.7 W/m2 and R2 of 0.66. At the irrigated crop (HHM) site, almost all algorithms underestimate LE, indicating these algorithms may not capture wet soil evaporation through their soil moisture parameterization. In contrast, the SM-PT algorithm shows high values of R2 (comparable to those of ATI-PT and VPD-PT) at most other (grass, wetland, desert, and Gobi) biomes. There are no obvious differences in seasonal LE estimation using MODIS NDVI and LAI at most sites. However, all meteorological or satellite-based water-related parameters used in the PT algorithms have uncertainties in optimizing water constraints. This analysis highlights the need to improve PT algorithms with regard to water constraints.
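For reference, the Priestley-Taylor flux at the heart of all three algorithms takes the standard form LE = α Δ/(Δ+γ)(Rn − G); the compared algorithms differ mainly in how they constrain α with water-availability proxies (ATI, SM, VPD). A minimal sketch with illustrative default constants:

```python
def priestley_taylor_le(rn, g, delta, gamma=0.066, alpha=1.26):
    """Priestley-Taylor latent heat flux (W/m2): LE = alpha*Delta/(Delta+gamma)*(Rn-G).
    delta: slope of the saturation vapour pressure curve (kPa/degC);
    gamma: psychrometric constant (kPa/degC); alpha: PT coefficient, which the
    satellite algorithms scale with their respective water-constraint proxies."""
    return alpha * delta / (delta + gamma) * (rn - g)

# Example: Rn = 400 W/m2, G = 40 W/m2, delta near 20 degC is about 0.145 kPa/degC
print(priestley_taylor_le(400.0, 40.0, 0.145))
```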
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
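The ACE idea itself is generic enough to sketch: alternately maximize the likelihood over each mutually exclusive parameter subset with the other held fixed, until the conditional steps stop improving. The negloglik function is a user-supplied placeholder:

```python
import numpy as np
from scipy.optimize import minimize

def ace(negloglik, theta_a, theta_b, tol=1e-6, max_cycles=50):
    """Alternating conditional estimation: optimize one parameter subset with
    the other held fixed, and alternate until convergence. negloglik(a, b) is
    the model's negative log-likelihood over both subsets (user supplied)."""
    prev = np.inf
    for _ in range(max_cycles):
        theta_a = minimize(lambda a: negloglik(a, theta_b), theta_a).x
        theta_b = minimize(lambda b: negloglik(theta_a, b), theta_b).x
        cur = negloglik(theta_a, theta_b)
        if prev - cur < tol:            # conditional steps stopped improving
            break
        prev = cur
    return theta_a, theta_b
```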
Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles
NASA Astrophysics Data System (ADS)
Mini, C.; Hogue, T. S.; Pincetl, S.
2012-04-01
Modeling water demand is a complex exercise in the choice of the functional form, techniques, and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic, and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate, and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code by the inverse-distance weighting method. Remotely sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed-effects model. Both models show income per capita and the percentage of landscaped area in each zip code as being statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes, and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output from a biophysical model including tree and grass cover areas, climate variables, and estimates of evapotranspiration at very high spatial resolution. A genetic algorithm based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of the prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand to help predict future water use and implement targeted landscaping conservation programs to maintain sustainable water needs for a growing population under uncertain climate variability.
Automated Slicing for a Multi-Axis Metal Deposition System (Preprint)
2006-09-01
Experimented with different materials, such as H13 tool steel, to build the part following the same slicing and scanning toolpath. Introduces a geometry reasoning and analysis tool, the centroidal axis: similar to the medial axis, it contains geometric and topological information but is significantly computationally…
Video image position determination
Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.
1991-01-01
An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
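The centroid computation at the heart of such a controller reduces to an intensity-weighted mean of pixel coordinates; a minimal sketch:

```python
import numpy as np

def beam_centroid(frame):
    """Intensity-weighted centroid of a video frame: the quantity the
    processing board computes before commanding the stepper motors."""
    frame = np.asarray(frame, float)
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (xs * frame).sum() / total, (ys * frame).sum() / total
```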
NASA Astrophysics Data System (ADS)
Ferrarello, Daniela; Mammana, Maria Flavia; Pennisi, Mario
2018-05-01
In this paper, we show some properties of centroids of geometric figures, such as triangles, quadrilaterals and tetrahedra. In particular, we will prove the properties by means of geometric transformations and by introducing extensions of triangles and quadrilaterals, i.e. by adding one, two or three new vertices to the figure. The study of these properties can be used, with profit, in a classroom activity supported by a dynamic geometry system.
NASA Technical Reports Server (NTRS)
Phinney, D. E. (Principal Investigator)
1980-01-01
An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
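A sketch of the stated estimator: slide the reference greenness profile over the observed dates and keep the shift that maximizes the cross correlation. The +/-30 day search window is an illustrative assumption:

```python
import numpy as np

def peak_green_date(dates, greenness, ref_dates, ref_profile):
    """Estimate the date of peak spectral response by maximizing the cross
    correlation between a reference profile and the observed multitemporal
    greenness pattern. Date arrays are day-of-year values, sorted ascending."""
    best, best_r = 0, -np.inf
    for s in np.arange(-30, 31):                       # candidate shifts in days
        ref = np.interp(dates, ref_dates + s, ref_profile)
        r = np.corrcoef(ref, greenness)[0, 1]
        if r > best_r:
            best, best_r = s, r
    return ref_dates[np.argmax(ref_profile)] + best    # shifted peak date
```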
SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H; Barbee, D; Wang, W
Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of a planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified "tissue" types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability of belonging to each tissue class, then assigning a CT HU as a probability-weighted summation of the classes' CT centroids. Two synthetic CTs from a CBCT were generated: s-CT, using the centroids from classification of the individual patient's CBCT/CT data; and s2-CT, using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from the planning CT within 3%, while doses calculated with heterogeneity correction off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to the CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
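The voxel-wise synthesis step can be sketched directly from the description: standard fuzzy c-means memberships (fuzzifier m = 2 assumed) give each voxel's class probabilities, and the synthetic HU is the probability-weighted sum of the class CT centroids:

```python
import numpy as np

def synthetic_ct(cbct, cbct_centroids, ct_centroids, m=2.0):
    """Assign each CBCT voxel a probability-weighted sum of the classes' CT
    centroids, using fuzzy c-means membership weights (m = 2 assumed)."""
    v = cbct.reshape(-1, 1).astype(float)
    d = np.abs(v - np.asarray(cbct_centroids, float)) + 1e-9  # distance to centroids
    u = d ** (-2.0 / (m - 1.0))                               # FCM memberships
    u /= u.sum(axis=1, keepdims=True)                         # normalize to probabilities
    return (u @ np.asarray(ct_centroids, float)).reshape(cbct.shape)
```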
Performance in population models for count data, part II: a new SAEM algorithm
Savic, Radojka; Lavielle, Marc
2009-01-01
Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795
Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays
NASA Technical Reports Server (NTRS)
Godara, Lal C.
1990-01-01
The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
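The structured estimate itself is a spatial (diagonal) averaging of the noisy correlation matrix; a sketch imposing the Hermitian-Toeplitz structure appropriate to an equispaced linear array:

```python
import numpy as np

def structured_acm(R_noisy):
    """Structured estimate of the array correlation matrix for an equispaced
    linear array: average along each diagonal to impose Toeplitz structure."""
    n = R_noisy.shape[0]
    R = np.zeros_like(R_noisy)
    for k in range(n):
        m = np.mean(np.diagonal(R_noisy, k))   # spatial average of lag-k terms
        R += np.diag(np.full(n - k, m), k)
        if k:
            R += np.diag(np.full(n - k, np.conj(m)), -k)  # Hermitian symmetry
    return R
```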
Spotting stellar activity cycles in Gaia astrometry
NASA Astrophysics Data System (ADS)
Morris, Brett M.; Agol, Eric; Davenport, James R. A.; Hawley, Suzanne L.
2018-06-01
Astrometry from Gaia will measure the positions of stellar photometric centroids to unprecedented precision. We show that the precision of Gaia astrometry is sufficient to detect starspot-induced centroid jitter for nearby stars in the Tycho-Gaia Astrometric Solution (TGAS) sample with magnetic activity similar to the young G-star KIC 7174505 or the active M4 dwarf GJ 1243, but is insufficient to measure centroid jitter for stars with Sun-like spot distributions. We simulate Gaia observations of stars with 10 year activity cycles to search for evidence of activity cycles, and find that Gaia astrometry alone likely cannot detect activity cycles for stars in the TGAS sample, even if they have spot distributions like KIC 7174505. We review the activity of the nearby low-mass stars in the TGAS sample for which we anticipate significant detections of spot-induced jitter.
3-Phenyl-6-(2-pyridyl)-1,2,4,5-tetrazine
Chartrand, Daniel; Laverdière, François; Hanan, Garry
2008-01-01
The title compound, C13H9N5, is the first asymmetric diaryl-1,2,4,5-tetrazine to be crystallographically characterized. We have been interested in this motif for incorporation into supramolecular assemblies based on coordination chemistry. The solid state structure shows a centrosymmetric molecule, forcing a positional disorder of the terminal phenyl and pyridyl rings. The molecule is completely planar, unusual for aromatic rings with N atoms in adjacent ortho positions. The stacking observed is very common in diaryltetrazines and is dominated by π stacking [centroid-to-centroid distance between the tetrazine ring and the aromatic ring of an adjacent molecule is 3.6 Å, perpendicular (centroid-to-plane) distance of about 3.3 Å]. PMID:21200916
Douglas, David R [Newport News, VA; Benson, Stephen V [Yorktown, VA
2007-01-23
A method of energy recovery for RF-based linear charged particle accelerators that allows energy recovery without large relative momentum spread of the particle beam. The beam, with injection energy E_0 and its centroid at a phase offset φ_0 from the crest of the accelerating waveform, is first accelerated to an energy E_full; the beam energy is then recovered with the centroid at a phase φ_0 + Δφ relative to the waveform crest, such that (E_full − E_0)(1 + cos(φ_0 + Δφ)) > δE/2, where δE is the full energy spread and δE/2 the full energy half-spread.
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
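A sketch of a common triangular square-root propagation step, restoring lower-triangular form with a QR factorization; the paper's specific triangularization scheme may differ:

```python
import numpy as np

def propagate_sqrt_cov(S, Phi, Q_sqrt):
    """Propagate a lower-triangular covariance square root S (P = S S^T)
    through the state transition Phi with process-noise square root Q_sqrt.
    The QR factorization of the pre-array restores triangular form."""
    n = S.shape[0]
    A = np.hstack([Phi @ S, Q_sqrt])       # pre-array: A A^T = Phi P Phi^T + Q
    _, Rt = np.linalg.qr(A.T)              # A^T = Q R  =>  A A^T = R^T R
    S_new = Rt.T                           # lower triangular square root
    # Fix signs so the diagonal is nonnegative (QR is unique up to signs).
    signs = np.sign(np.diag(S_new))
    signs[signs == 0] = 1.0
    return S_new * signs
```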
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
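The central trick, computing R^{-1}a by conjugate gradients so that no matrix inverse is ever formed, can be sketched with plain CG (this is not the constrained MICCG/SICCG recursions themselves):

```python
import numpy as np

def mvdr_cg(R, a, iters=None, tol=1e-12):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a), with R^{-1} a computed by
    conjugate gradients; no explicit or implicit matrix inversion is formed."""
    n = len(a)
    x = np.zeros(n, complex)
    r = a - R @ x
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters or n):
        Rp = R @ p
        alpha = rs / np.vdot(p, Rp)
        x += alpha * p
        r -= alpha * Rp
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x / np.vdot(a, x)               # distortionless-response normalization
```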
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving the patch's phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
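The ER iteration itself is a classic alternating-projection loop; a sketch assuming the target magnitude has already been estimated from the similar known patches:

```python
import numpy as np

def er_inpaint(patch, known_mask, target_mag, iters=100):
    """Error-reduction loop: alternately impose the estimated Fourier magnitude
    and the known spatial samples until the missing pixels stabilize.
    target_mag would come from the selected similar patches in the method."""
    x = np.where(known_mask, patch, patch[known_mask].mean())
    for _ in range(iters):
        X = np.fft.fft2(x)
        X = target_mag * np.exp(1j * np.angle(X))   # keep phase, fix magnitude
        x = np.real(np.fft.ifft2(X))
        x[known_mask] = patch[known_mask]           # re-impose known intensities
    return x
```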
Distributed parameter statics of magnetic catheters.
Tunay, Ilker
2011-01-01
We discuss how to use special Cosserat rod theory to derive distributed-parameter static equilibrium equations for magnetic catheters. These medical devices are used for minimally invasive diagnostic and therapeutic procedures and can be operated remotely or controlled by automated algorithms. The magnetic material can be lumped in rigid segments or distributed in flexible segments. The position vector of the cross-section centroid and a quaternion representation of an orthonormal triad are selected as the degrees of freedom. The strain energy for transversely isotropic, hyperelastic rods is augmented with the mechanical potential energy of the magnetic field and a penalty term to enforce the quaternion unity constraint. The numerical solution is found by 1D finite elements. Material properties of polymer tubes in extension, bending, and twist are determined by mechanical and magnetic experiments. Numerical experiments with commercial FEM software indicate that the computational effort with the proposed method is at least one order of magnitude less than that of standard 3D FEM.
Laser transit anemometer software development program
NASA Technical Reports Server (NTRS)
Abbiss, John B.
1989-01-01
Algorithms were developed for the extraction of two components of mean velocity, the standard deviations, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
A layered modulation method for pixel matching in online phase measuring profilometry
NASA Astrophysics Data System (ADS)
Li, Hongru; Feng, Guoying; Bourgade, Thomas; Yang, Peng; Zhou, Shouhuan; Asundi, Anand
2016-10-01
An online phase measuring profilometry method with a new layered modulation scheme for pixel matching is presented. In this method, and in contrast with previous modulation matching methods, the captured images are enhanced by Retinex theory for a better modulation distribution, and all the layered modulation masks are fully used to determine the displacement of a rectilinearly moving object. High, medium, and low modulation masks are obtained by performing binary segmentation with the iterative Otsu method. The final pixel shifts are calculated based on the centroid concept, after which the aligned fringe patterns can be extracted from each frame. After performing the Stoilov algorithm and a series of subsequent operations, the profile of an object on a translation stage is reconstructed. All procedures are carried out automatically, without setting specific parameters in advance. Numerical simulations are detailed, and experimental results verify the validity and feasibility of the proposed approach.
A two-dimensional intensified photodiode array for imaging spectroscopy
NASA Technical Reports Server (NTRS)
Tennyson, P. D.; Dymond, K.; Moos, H. W.; Feldman, P. D.; Mackey, E. F.
1986-01-01
The Johns Hopkins University is currently developing an instrument to fly aboard NASA's Space Shuttle as a Spartan payload in the late 1980s. This Spartan free flyer will obtain spatially resolved spectra of faint extended emission line objects in the wavelength range 750-1150 A at about 2-A resolution. The use of two-dimensional photon counting detectors will give simultaneous coverage of the 400 A spectral range and the 9 arc-minute spatial resolution along the spectrometer slit. The progress towards the flight detector is reported here with preliminary results from a laboratory breadboard detector, and a comparison with the one-dimensional detector developed for the Hopkins Ultraviolet Telescope. A hardware digital centroiding algorithm has been successfully implemented. The system is ultimately capable of 15-micron resolution in two dimensions at the image plane and can handle continuous counting rates of up to 8000 counts/s.
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce erroneous matching pairs. After that, the target centroid is detected by regular moments. Consequently, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications
NASA Astrophysics Data System (ADS)
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and degrade communication performance heavily. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters, along with estimates of the variance of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.
Ipsen, Andreas
2017-02-03
The mass peak centroid is a quantity that is at the core of mass spectrometry (MS). However, despite its central status in the field, models of its statistical distribution are often chosen quite arbitrarily and without attempts at establishing a proper theoretical justification for their use. Recent work has demonstrated that for mass spectrometers employing analog-to-digital converters (ADCs) and electron multipliers, the statistical distribution of the mass peak intensity can be described via a relatively simple model derived essentially from first principles. Building on this result, the following article derives the corresponding statistical distribution for the mass peak centroids of such instruments. It is found that for increasing signal strength, the centroid distribution converges to a Gaussian distribution whose mean and variance are determined by physically meaningful parameters and which in turn determine the bias and variability of the m/z measurements of the instrument. Through the introduction of the concept of "pulse-peak correlation", the model also elucidates the complicated relationship between the shape of the voltage pulses produced by the preamplifier and the mean and variance of the centroid distribution. The predictions of the model are validated with empirical data and with Monte Carlo simulations.
Structure and seasonal variations of the nocturnal mesospheric K layer at Arecibo
NASA Astrophysics Data System (ADS)
Yue, Xianchang; Friedman, Jonathan S.; Wu, Xiongbin; Zhou, Qihou H.
2017-07-01
We present the seasonal variations of the nocturnal mesospheric potassium (K) layer at Arecibo, Puerto Rico (18.35°N, 66.75°W) from 160 nights of K Doppler lidar observations between December 2003 and January 2010, during which solar activity was mostly low. The background temperature was also measured simultaneously by the lidar and shows a strong semiannual oscillation with maxima occurring during the equinoxes at all altitudes. The annual mean K density profile is approximately Gaussian with a peak altitude of 91.7 km. The K column abundance and the centroid height have strong semiannual variations, with maxima at the solstices. Both parameters are negatively correlated with the mean background temperature, with a correlation coefficient < -0.5. The root-mean-square (RMS) width has a distinct annual oscillation with the largest width occurring in May. The seasonal variation of the centroid height is similar to that of the Fe layer at the same site. The seasonal temperature variation indicates significantly enhanced wave-induced downward transport for both species during spring and autumn. This explains the metal layer centroid height and column abundance variations at Arecibo and provides a general mechanism to account for the seasonal variations in the centroid height of all metal species measured at low-latitude and midlatitude sites.
Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine
2010-08-01
The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
Statistical Properties of Line Centroid Velocity Increments in the rho Ophiuchi Cloud
NASA Technical Reports Server (NTRS)
Lis, D. C.; Keene, Jocelyn; Li, Y.; Phillips, T. G.; Pety, J.
1998-01-01
We present a comparison of histograms of CO (2-1) line centroid velocity increments in the rho Ophiuchi molecular cloud with those computed for spectra synthesized from a three-dimensional, compressible, but non-starforming and non-gravitating hydrodynamic simulation. Histograms of centroid velocity increments in the rho Ophiuchi cloud show clearly non-Gaussian wings, similar to those found in histograms of velocity increments and derivatives in experimental studies of laboratory and atmospheric flows, as well as numerical simulations of turbulence. The magnitude of these wings increases monotonically with decreasing separation, down to the angular resolution of the data. This behavior is consistent with that found in the phase of the simulation which has most of the properties of incompressible turbulence. The time evolution of the magnitude of the non-Gaussian wings in the histograms of centroid velocity increments in the simulation is consistent with the evolution of the vorticity in the flow. However, we cannot exclude the possibility that the wings are associated with the shock interaction regions. Moreover, in an active starforming region like the rho Ophiuchi cloud, the effects of shocks may be more important than in the simulation. However, being able to identify shock interaction regions in the interstellar medium is also important, since numerical simulations show that vorticity is generated in shock interactions.
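Computing such increment histograms from a map of line centroid velocities is straightforward to sketch; the lag and binning are illustrative choices:

```python
import numpy as np

def increment_histogram(v_cent, lag, bins=50):
    """Normalized histogram of centroid-velocity increments at a given pixel
    separation: collect v(x+l) - v(x) over the map along both axes."""
    dv = np.concatenate([(v_cent[lag:, :] - v_cent[:-lag, :]).ravel(),
                         (v_cent[:, lag:] - v_cent[:, :-lag]).ravel()])
    dv = dv[np.isfinite(dv)]                       # drop blanked map pixels
    dv = (dv - dv.mean()) / dv.std()               # standardize for comparison
    return np.histogram(dv, bins=bins, density=True)
```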
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
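The inspection step reduces to evaluating a fitted probability model on the value a packet would write; a toy sketch with a single-feature logistic model and hypothetical coefficients (the real algorithm builds a probability mass function per memory variable):

```python
import numpy as np

def normalcy_probability(beta0, beta1, value):
    """Logistic model of the probability that writing `value` to a control
    system variable is normal. The single-feature form and the coefficient
    values below are illustrative assumptions, not the paper's fitted model."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * value)))

# A packet writing 840 to a setpoint variable, with coefficients learned
# offline by maximum likelihood (hypothetical numbers):
p = normalcy_probability(-3.0, 0.004, 840.0)
print("accept" if p > 0.5 else "flag", p)
```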
Robust image modeling techniques with an image restoration application
NASA Astrophysics Data System (ADS)
Kashyap, Rangasami L.; Eom, Kie-Bum
1988-08-01
A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
A hybrid method for accurate star tracking using star sensor and gyros.
Lu, Jiazhen; Yang, Lie; Zhang, Hao
2017-10-01
Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
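The gyro-based prediction step can be sketched as a small rigid rotation of the current star unit vector by the measured body rate (the star direction rotates opposite to the body rotation):

```python
import numpy as np

def predict_star_vector(s, omega, dt):
    """Predict the next-frame star unit vector by rotating the current one
    with the gyro-measured body rate, via the Rodrigues rotation formula."""
    theta = np.linalg.norm(omega) * dt
    if theta == 0.0:
        return s
    k = omega / np.linalg.norm(omega)            # rotation axis (unit vector)
    # Inverse rotation: the star moves opposite to the body rotation.
    return (s * np.cos(theta) - np.cross(k, s) * np.sin(theta)
            + k * np.dot(k, s) * (1 - np.cos(theta)))
```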
Whiteman, Aroscott
2012-01-01
In 2010, the U.S. Geological Survey, in cooperation with the Montana Department of Environmental Quality, initiated a dye-tracer study to determine travel times, streamflow velocities, and longitudinal dispersion rates for the Missouri River upstream from Canyon Ferry Lake. For this study, rhodamine WT (RWT) dye was injected at two locations, Missouri River Headwaters State Park in early September and Broadwater-Missouri Dam (Broadwater Dam) in late August 2010. Dye concentrations were measured at three sites downstream from each dye-injection location. The study area was a 41.2-mile reach of the Missouri River from Trident, Montana, at the confluence of the Jefferson, Madison, and Gallatin Rivers (Missouri River Headwaters) at river mile 2,319.40 downstream to the U.S. Route 12 Bridge (Townsend Bridge), river mile 2,278.23, near Townsend, Montana. Streamflows were reasonably steady and ranged from 3,070 to 3,700 cubic feet per second. Mean velocities were calculated for each subreach between measurement sites for the leading edge, peak concentration, centroid, and trailing edge at 10 percent of the peak concentration of the dye plume. Calculated velocities for the centroid of the dye plume ranged from 0.80 to 3.02 feet per second within the study reach from Missouri River Headwaters to Townsend Bridge, near Townsend. The mean velocity of the dye plume for the entire study reach, excluding the subreach between the abandoned Milwaukee Railroad bridge at Lombard, Montana (Milwaukee Bridge) and Broadwater-Missouri Dam (Broadwater Dam), was 2.87 feet per second. The velocity of the centroid of the dye plume for the subreach between Milwaukee Bridge and Broadwater Dam (Toston Reservoir) was 0.80 feet per second. The residence time for Toston Reservoir was 8.2 hours during this study. Estimated longitudinal dispersion rates of the dye plume for this study ranged from 0.72 feet per second for the subreach from Milwaukee Bridge to Broadwater Dam to 2.26 feet per second for the subreach from the U.S. Route 287 Bypass Bridge over the Missouri River north of Toston, Montana to Yorks Islands. A relation was determined between travel time of the peak concentration and the time for the dye plume to pass a site (duration of the dye plume). This relation can be used to estimate when the decreasing concentration of a potential contaminant is reduced to 10 percent of its peak concentration for accidental (contaminant or chemical) spills into the upper Missouri River.
An interpretation of the narrow positron annihilation feature from X-ray nova Muscae 1991
NASA Technical Reports Server (NTRS)
Chen, Wan; Gehrels, Neil; Cheng, F. H.
1993-01-01
The physical mechanism responsible for the narrow redshifted positron annihilation gamma-ray line from the X-ray nova Muscae 1991 is studied. The orbital inclination angle of the system is estimated and its black hole mass is constrained under the assumptions that the annihilation line centroid redshift is purely gravitational and that the line width is due to the combined effect of temperature broadening and disk rotation. The resulting black hole mass lower limit of 8 solar masses and the high binary mass ratio it implies raise a serious challenge to theoretical models of the formation and evolution of massive binaries.
ERIC Educational Resources Information Center
Yang, Ji Seung; Cai, Li
2014-01-01
The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2006-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5-deg.-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5-deg.-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5-deg.-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5-deg.-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.
A Fast Projection-Based Algorithm for Clustering Big Data.
Wu, Yun; He, Zhiquan; Lin, Hao; Zheng, Yufei; Zhang, Jingfen; Xu, Dong
2018-06-07
With the fast development of various techniques, more and more data have been accumulated with the unique properties of large size (tall) and high dimension (wide). The era of big data is coming. How to understand and discover new knowledge from these data has attracted more and more scholars' attention and has become the most important task in data mining. As one of the most important techniques in data mining, clustering analysis, a kind of unsupervised learning, can group a set of data into clusters that are meaningful, useful, or both. Thus, the technique has played a very important role in knowledge discovery in big data. However, when facing large-sized and high-dimensional data, most current clustering methods exhibit poor computational efficiency and demand substantial computational resources, which prevents us from clarifying the intrinsic properties of and discovering new knowledge behind the data. Based on this consideration, we developed a clustering method called MUFOLD-CL. The principle of the method is to project the data points onto the centroid, and then to measure the similarity between any two points by calculating their projections on the centroid. The proposed method achieves linear time complexity with respect to the sample size. Comparison with the K-means method on very large data showed that our method produces better accuracy and requires less computational time, demonstrating that MUFOLD-CL can serve as a valuable tool, or at least play a complementary role to other existing methods, for big data clustering. Further comparisons with state-of-the-art clustering methods on smaller datasets showed that our method was fastest and achieved comparable accuracy. For the convenience of most scholars, a free software package was constructed.
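The projection step is simple enough to sketch. Below is a minimal, illustrative Python version of the idea (not the published MUFOLD-CL implementation: the equal-width binning of the 1D projections and all names are our assumptions), mainly to show why the cost is linear in the sample size:

```python
import numpy as np

def projection_cluster(X, n_clusters):
    """Cluster by 1D projections onto the data centroid direction.

    Each point is reduced to its scalar projection onto the global
    centroid vector, and the resulting 1D values are split into
    equal-width bins.  The real MUFOLD-CL method is more elaborate;
    this only illustrates the linear-time projection idea.
    """
    centroid = X.mean(axis=0)
    direction = centroid / np.linalg.norm(centroid)
    proj = X @ direction                          # O(n*d) scalar projections
    edges = np.linspace(proj.min(), proj.max(), n_clusters + 1)
    labels = np.clip(np.searchsorted(edges, proj, side="right") - 1,
                     0, n_clusters - 1)
    return labels

X = np.random.rand(100000, 50)                    # tall-and-wide toy data
labels = projection_cluster(X, n_clusters=10)
```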
Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.
Stępień, Grzegorz
2018-03-17
The following article presents a new isometric transformation algorithm based on a transformation in a newly normed Hilbert-type space. The presented method is based on so-called virtual translations, already known in advance, of two relatively oblique orthogonal coordinate systems (the interior and exterior orientation of sensors) to a common point known in both systems. Each of the systems is translated along its axis (the systems have common origins) while the relative angular orientation of both coordinate systems remains constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert-type space. As such, the displacement of the two relatively oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions on a test data set and on measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10^-6 of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert-type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and, by reducing the translation of the two relatively oblique orthogonal coordinate systems to zero, enables direct calculation of the exterior orientation of the sensors.
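The abstract does not give the algebra of MCIT itself, but the step it describes (once the translations of the two common-origin systems are reduced to zero, the rotation can be computed directly) has a standard analogue: the SVD-based (Kabsch) rotation estimate between matched point sets. A hedged sketch of that analogue, not the author's Hilbert-type construction:

```python
import numpy as np

def rotation_between_frames(A, B):
    """Estimate the rotation R with b ~= R @ a for matched point sets
    A, B of shape (n, 3) whose coordinate systems share a common
    origin (translations already reduced to zero, as in the MCIT
    setup described above).

    This is the standard SVD (Kabsch) solution, shown only as an
    analogue of the 'direct rotation' step; the paper's Hilbert-type
    norm construction is not reproduced here.
    """
    H = A.T @ B                                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R
```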
Examination of Multiple Lithologies Within the Primitive Ordinary Chondrite NWA 5717
NASA Technical Reports Server (NTRS)
Cato, M. J.; Simon, J. I.; Ross, D. K.; Morris, R. V.
2017-01-01
Northwest Africa 5717 is a primitive (subtype 3.05) ungrouped ordinary chondrite which contains two apparently distinct lithologies. In large cut meteorite slabs, the darker of these, lithology A, appears to host the second, much lighter in color, lithology B (upper left, Fig. 1). The nature of the boundary between the two is uncertain, ranging from abrupt to gradational and not always following particle boundaries. The distinction between the lithologies, beyond the obvious color differences, has been supported by a discrepancy in oxygen isotopes and an incongruity in the magnesium contents of chondrule olivine. Here, quantitative textural analysis and mineralogical methods have been used to investigate the two apparent lithologies within NWA 5717. Olivine grains contained in a thin section from NWA 7402, thought to be paired to 5717, were also measured to re-examine the distinct compositional range among the light and dark areas. Procedure: Particles from a high-resolution mosaic image of a roughly 13 x 15 cm slice of NWA 5717 were traced in Adobe Photoshop. Due to the large size of the sample, visually representative regions of each lithology were chosen for analysis. The resulting layers of digitized particles were imported into ImageJ, which was used to measure their area, along with the axes, the angle from horizontal, and the centroid coordinates of ellipses fitted to each particle. Resulting 2D pixel areas were converted to spherical diameters employing the unfolding algorithm, which outputs a 3D particle size distribution based on digitized 2D size frequency data. Spatstat was used to create kernel density plots of the centroid coordinates for each region. X-ray compositional maps, microprobe analyses, and Mössbauer spectroscopy were conducted on a thin section of NWA 7402, tentatively paired to NWA 5717.
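For readers unfamiliar with the 2D-to-3D step, the conversion from a digitized particle area to an equivalent diameter is the easy part; a minimal sketch follows (the pixel scale is a made-up value, and the full stereological unfolding to a 3D size distribution used in the study is considerably more involved):

```python
import numpy as np

# Illustrative only: converting digitized 2D particle areas (pixels) to
# equivalent-circle diameters in mm.  The study's unfolding algorithm,
# which turns these 2D sizes into a 3D size distribution, is omitted.
MM_PER_PIXEL = 0.02                    # assumed image scale, not the paper's

def equivalent_diameters(areas_px):
    areas_mm2 = np.asarray(areas_px, float) * MM_PER_PIXEL ** 2
    return 2.0 * np.sqrt(areas_mm2 / np.pi)     # d = 2 * sqrt(A / pi)
```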
A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm
You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei
2011-01-01
With the current widespread interest in the development and applications of micro/nanosatellites, a small, high-accuracy satellite attitude determination system is needed, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, high-accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS detector placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
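The core of any mean-shift centroiding scheme is an iterated, intensity-weighted window centroid. A minimal single-spot sketch is shown below (the multi-point scheduling of FMMS across the 36 apertures, and its handling of missing apertures, are not reproduced; window size and tolerances are illustrative):

```python
import numpy as np

def mean_shift_centroid(img, start, win=5, tol=1e-3, max_iter=50):
    """Refine a single spot centroid by iterated windowed centroiding
    (the basic mean-shift step).  Assumes the window stays inside the
    image; FMMS-specific bookkeeping is not reproduced here.
    """
    y, x = map(float, start)
    for _ in range(max_iter):
        y0 = int(round(y)) - win
        x0 = int(round(x)) - win
        patch = img[y0:y0 + 2 * win + 1, x0:x0 + 2 * win + 1].astype(float)
        ys, xs = np.mgrid[y0:y0 + 2 * win + 1, x0:x0 + 2 * win + 1]
        w = patch.sum()
        if w <= 0:
            break
        ny = (ys * patch).sum() / w          # intensity-weighted centroid
        nx = (xs * patch).sum() / w
        converged = abs(ny - y) < tol and abs(nx - x) < tol
        y, x = ny, nx
        if converged:
            break
    return y, x
```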
Real-time optical flow estimation on a GPU for a skid-steered mobile robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-04-01
Accurate egomotion estimation is required for mobile robot navigation. Often the egomotion is estimated using optical flow algorithms. For accurate estimation of optical flow, most modern algorithms require large memory resources and high processor speed. However, the simple single-board computers that control the motion of a robot usually do not provide such resources. On the other hand, most modern single-board computers are equipped with an embedded GPU that could be used in parallel with the CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used. Layer segmentation is performed using a graph-cut algorithm with a time-derivative-based energy function. This approach makes the algorithm both fast and robust in low-light and low-texture conditions. The algorithm implementation for a Raspberry Pi Model B computer is discussed. For evaluation of the algorithm, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated using the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
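Phase correlation itself is compact enough to sketch. The following Python fragment estimates the integer-pixel shift between two frames via the FFT (a single-layer, CPU sketch; the Z-flow layer segmentation, subpixel refinement, and GPU mapping are omitted):

```python
import numpy as np

def phase_correlation(frame1, frame2):
    """Estimate the integer-pixel translation between two frames via
    phase correlation: keep only the phase of the cross-power
    spectrum, invert it, and locate the correlation peak.
    """
    F1 = np.fft.fft2(frame1)
    F2 = np.fft.fft2(frame2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame into negative offsets
    if dy > frame1.shape[0] // 2:
        dy -= frame1.shape[0]
    if dx > frame1.shape[1] // 2:
        dx -= frame1.shape[1]
    return dy, dx
```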
Efficient algorithms for single-axis attitude estimation
NASA Technical Reports Server (NTRS)
Shuster, M. D.
1981-01-01
The computationally efficient algorithms determine attitude from measurements of arc lengths and dihedral angles. The dependence of these algorithms on the solution of trigonometric equations was reduced. Both single-time and batch estimators are presented, along with the covariance analysis of each algorithm.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a relevant problem in computer vision. In this paper we propose an approach to estimate the orientation of objects lacking axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage explores the studied object. Using the 3D model, we gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere. The viewpoint distribution follows the geosphere principle, which minimizes the size of the training image set. The gathered training image set is used for calculating descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error less than 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery (SRS) using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used these data, together with weighted and unweighted versions of the k-means clustering algorithm, to group the targets to be treated with a single isocenter and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm^2 and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
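A compact weighted k-means of the kind the abstract describes can be sketched as follows (the choice of target weights, e.g. volumes, and all parameter names are our assumptions; the clinical metrics above are not computed here). Setting the weights to ones gives the unweighted variant:

```python
import numpy as np

def weighted_kmeans(centroids_xyz, weights, k, n_iter=100, seed=0):
    """Group target centroids and place one isocenter per group via
    weighted k-means.  'weights' could be, e.g., target volumes; the
    paper's exact weighting is not reproduced here.
    """
    weights = np.asarray(weights, float)
    rng = np.random.default_rng(seed)
    iso = centroids_xyz[rng.choice(len(centroids_xyz), k,
                                   replace=False)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(centroids_xyz[:, None, :] - iso[None], axis=2)
        lab = d.argmin(axis=1)                   # assign to nearest isocenter
        for j in range(k):
            m = lab == j
            if m.any():                          # weighted cluster mean
                iso[j] = np.average(centroids_xyz[m], axis=0,
                                    weights=weights[m])
    return lab, iso

# e.g.: lab, iso = weighted_kmeans(met_centroids, met_volumes, k=3)
```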
Lee, Jung Keun; Park, Edward J.; Robinovitch, Stephen N.
2012-01-01
This paper proposes a Kalman filter-based attitude (i.e., roll and pitch) estimation algorithm using an inertial sensor composed of a triaxial accelerometer and a triaxial gyroscope. In particular, the proposed algorithm has been developed for accurate attitude estimation during dynamic conditions, in which external acceleration is present. Although external acceleration is the main source of attitude estimation error, and despite the need for its accurate estimation in many applications, this problem, which can be critical for attitude estimation, has not been addressed explicitly in the literature. Accordingly, this paper addresses the combined estimation problem of the attitude and external acceleration. Experimental tests were conducted to verify the performance of the proposed algorithm in various dynamic condition settings and to provide further insight into the variations in the estimation accuracy. Furthermore, two different approaches for dealing with the estimation problem during dynamic conditions were compared: a threshold-based switching approach versus an acceleration model-based approach. Based on an external acceleration model, the proposed algorithm was capable of estimating accurate attitudes and external accelerations for short accelerated periods, showing its high effectiveness during short-term fast dynamic conditions. Conversely, when the testing condition involved prolonged high external accelerations, the proposed algorithm exhibited gradually increasing errors. However, as soon as the condition returned to static or quasi-static conditions, the algorithm was able to stabilize the estimation error, regaining its high estimation accuracy. PMID:22977288
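As a rough illustration of accelerometer/gyroscope attitude fusion, a simple complementary filter for roll and pitch is sketched below. This is deliberately not the paper's Kalman filter (it has no external-acceleration model or switching logic); it only shows the two information sources being blended:

```python
import numpy as np

def attitude_filter(acc, gyr, dt, alpha=0.98):
    """Roll/pitch from 3-axis accelerometer and gyroscope samples with
    a complementary filter: the gyro integral tracks fast motion, the
    accelerometer tilt corrects drift when the sensor is near static.
    acc, gyr: iterables of 3-vectors; dt: sample period in seconds.
    """
    roll = pitch = 0.0
    out = []
    for a, w in zip(acc, gyr):
        roll += w[0] * dt                        # gyro propagation
        pitch += w[1] * dt
        acc_roll = np.arctan2(a[1], a[2])        # gravity-based tilt
        acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll = alpha * roll + (1 - alpha) * acc_roll
        pitch = alpha * pitch + (1 - alpha) * acc_pitch
        out.append((roll, pitch))
    return np.array(out)
```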
A comparison of Q-factor estimation methods for marine seismic data
NASA Astrophysics Data System (ADS)
Kwon, J.; Ha, J.; Shin, S.; Chung, W.; Lim, C.; Lee, D.
2016-12-01
The seismic imaging technique draws information from inside the earth using seismic reflection and transmission data. This technique is an important method in geophysical exploration. It has also been employed widely as a means of locating oil and gas reservoirs because it offers information on geological media. There is much recent and active research into seismic attenuation and how it determines the quality of seismic imaging. Seismic attenuation is determined by various geological characteristics, through the absorption or scattering that occurs when a seismic wave passes through a geological medium. Seismic attenuation can be described by an attenuation coefficient and represented as a non-dimensional variable known as the Q-factor. The Q-factor is a unique characteristic of a geological medium and a very important material property for oil and gas resource development. The Q-factor can be used to infer other characteristics of a medium, such as porosity, permeability, and viscosity, and can directly indicate the presence of hydrocarbons to identify oil- and gas-bearing areas from seismic data. There are various ways to estimate the Q-factor in three different domains. In the time domain, pulse amplitude decay, pulse rise time, and pulse broadening are representative. Logarithmic spectral ratio (LSR), centroid frequency shift (CFS), and peak frequency shift (PFS) are used in the frequency domain. In the time-frequency domain, the Wavelet Envelope Peak Instantaneous Frequency (WEPIF) is most frequently employed. In this study, we estimated and analyzed the Q-factor through numerical model tests using four methods: LSR, CFS, PFS, and WEPIF. Before applying these four methods to observed data, we experimented with a numerical model test. The numerical model test data were generated with NORSAR-2D, which is based on a ray-tracing algorithm, and we used reflection and normal-incidence surveys to calculate the Q-factor according to the array of sources and receivers. After the numerical model test, we chose the most accurate of the four methods by comparing Q-factors from the reflection and normal-incidence surveys. We applied this method to the observed data and verified its accuracy.
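Of the frequency-domain methods, CFS is the most compact to illustrate. Under the Gaussian-spectrum assumption of Quan and Harris (1997), Q = pi * t * var_s / (f_s - f_r), with f_s and f_r the centroid frequencies of the source and received spectra and var_s the source spectral variance. A sketch, leaving aside the windowing and smoothing choices that matter in practice:

```python
import numpy as np

def q_centroid_frequency_shift(f, S_src, S_rec, traveltime):
    """Q from the centroid frequency shift (CFS) method, under the
    Gaussian-spectrum assumption: Q = pi * t * var_src / (f_src - f_rec).
    f: frequency axis; S_src, S_rec: amplitude spectra; traveltime: t.
    """
    f = np.asarray(f, float)

    def centroid(S):
        return np.sum(f * S) / np.sum(S)

    f_src, f_rec = centroid(S_src), centroid(S_rec)
    var_src = np.sum((f - f_src) ** 2 * S_src) / np.sum(S_src)
    return np.pi * traveltime * var_src / (f_src - f_rec)
```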
Immune Centroids Over-Sampling Method for Multi-Class Classification
2015-05-22
recognize specific antigens. The response of a receptor to an antigen can activate its hosting B-cell. An activated B-cell then proliferates and… modifying N. K. Jerne's theory. The theory states that in a pre-existing group of lymphocytes (specifically B cells), a specific antigen only… the clusters of each small class, which have high data density, called global immune centroids over-sampling (denoted as Global-IC). Specifically…
A motion detection system for AXAF X-ray ground testing
NASA Technical Reports Server (NTRS)
Arenberg, Jonathan W.; Texter, Scott C.
1993-01-01
The concept, implementation, and performance of the motion detection system (MDS) designed as a diagnostic for X-ray ground testing for AXAF are described. The purpose of the MDS is to measure the magnitude of a relative rigid body motion among the AXAF test optic, the X-ray source, and X-ray focal plane detector. The MDS consists of a point source, lens, centroid detector, transimpedance amplifier, and computer system. Measurement of the centroid position of the image of the optical point source provides a direct measure of the motions of the X-ray optical system. The outputs from the detector and filter/amplifier are digitized and processed using the calibration with a 50 Hz bandwidth to give the centroid's location on the detector. Resolution of 0.008 arcsec has been achieved by this system. Data illustrating the performance of the motion detection system are also presented.
Assessment of auditory impression of the coolness and warmness of automotive HVAC noise.
Nakagawa, Seiji; Hotehama, Takuya; Kamiya, Masaru
2017-07-01
Noise induced by the heating, ventilation and air conditioning (HVAC) system in a vehicle is an important factor that affects the comfort of the interior of a car cabin. Much effort has been devoted to reducing noise levels; however, there is a need for a new sound design that addresses the noise problem from a different point of view. In this study, focusing on the auditory impression of automotive HVAC noise concerning coolness and warmness, psychoacoustical listening tests were performed using a paired comparison technique under various room temperature conditions. Five stimuli were synthesized by stretching the spectral envelopes of recorded automotive HVAC noise to assess the effect of the spectral centroid, and were presented to normal-hearing subjects. Results show that the spectral centroid significantly affects the auditory impression concerning coolness and warmness; a higher spectral centroid induces a cooler auditory impression regardless of the room temperature.
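The spectral centroid used as the stimulus parameter here is a one-line computation; a minimal Python version (uniform magnitude weighting is assumed, whereas psychoacoustic work sometimes weights by loudness instead):

```python
import numpy as np

def spectral_centroid(x, fs):
    """Spectral centroid of a recording in Hz: the magnitude-weighted
    mean frequency that the listening tests relate to the cool/warm
    impression of HVAC noise.
    """
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(f * X) / np.sum(X)
```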
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
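The E-step for right-censored Poisson data has a convenient closed form, E[Y | Y >= c] = lambda * P(Y >= c-1) / P(Y >= c), which makes a minimal EM easy to sketch. The version below omits the paper's additive background term and estimates only a scale amplitude, so it is an illustration of the censoring mechanics rather than the published algorithm:

```python
import numpy as np
from scipy.stats import poisson

def em_censored_poisson_amplitude(z, censored, shape, c, n_iter=200):
    """EM estimate of the amplitude a in y_i ~ Poisson(a * shape_i)
    when counts at or above the saturation level c are right-censored.

    z:        observed counts (censored entries hold the clipped value)
    censored: boolean array, True where y_i >= c was only known, not seen
    shape:    known intensity shape per bin
    """
    z = np.asarray(z, float)
    censored = np.asarray(censored, bool)
    shape = np.asarray(shape, float)
    a = max(z.mean() / shape.mean(), 1e-6)       # crude initializer
    for _ in range(n_iter):
        lam = a * shape
        y = z.copy()
        if censored.any():
            l = lam[censored]
            # E[Y | Y >= c] = lam * P(Y >= c-1) / P(Y >= c); scipy's
            # sf(k) = P(Y > k), so P(Y >= c) = sf(c - 1).
            y[censored] = l * poisson.sf(c - 2, l) / poisson.sf(c - 1, l)
        a = y.sum() / shape.sum()                # M-step: closed form
    return a
```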
NASA Astrophysics Data System (ADS)
Negi, Sanjay S.; Paul, Ajay; Cesca, Simone; Kamal; Kriegerowski, Marius; Mahesh, P.; Gupta, Sandeep
2017-08-01
In order to understand present-day earthquake kinematics at the Indian plate boundary, we analyse seismic broadband data recorded between 2007 and 2015 by the regional network in the Garhwal-Kumaun region, northwest Himalaya. We first estimate a local 1-D velocity model for the computation of reliable Green's functions, based on 2837 P-wave and 2680 S-wave arrivals from 251 well-located earthquakes. The resulting 1-D crustal structure yields a 4-layer velocity model down to depths of 20 km. A fifth homogeneous layer extends down to 46 km, constraining the Moho using the travel-time versus distance curve method. We then employ a multistep moment tensor (MT) inversion algorithm to infer seismic moment tensors of 11 moderate earthquakes with Mw magnitude in the range 4.0-5.0. The method provides a fast MT inversion for future monitoring of local seismicity, since a Green's functions database has been prepared. To further support the moment tensor solutions, we additionally model P-phase beams at seismic arrays at teleseismic distances. The MT inversion results reveal the presence of dominant thrust fault kinematics persisting along the Himalayan belt. Shallow low- and high-angle thrust faulting is the dominating mechanism in the Garhwal-Kumaun Himalaya. The centroid depths for these moderate earthquakes are shallow, between 1 and 12 km. The beam modeling results confirm hypocentral depth estimates between 1 and 7 km. The updated seismicity, constrained source mechanisms and depth results indicate a typical setting of duplexes above the mid-crustal ramp, where slip is confirmed along out-of-sequence thrusting. The involvement of the Tons thrust sheet in out-of-sequence thrusting indicates the Tons thrust to be the principal active thrust at shallow depth in the Himalayan region. Our results thus support the critical taper wedge theory, where we infer the microseismicity cluster to result from intense activity within the Lesser Himalayan Duplex (LHD) system.
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on real data without ground truths are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zábranová, Eliška; Matyska, Ctirad
2014-10-01
After the 2010 Maule and 2011 Tohoku earthquakes, the spheroidal modes up to 1 mHz were clearly registered by the Global Geodynamic Project (GGP) network of superconducting gravimeters (SG). Fundamental parameters in synthetic calculations of the signals are the quality factors of the modes. We study the role of their uncertainties in centroid-moment-tensor (CMT) inversions. First, we inverted the SG data from selected GGP stations to jointly determine the quality factors of these normal modes and the three low-frequency CMT components, Mrr, (Mϑϑ − Mφφ)/2 and Mϑφ, that generate the observed SG signal. We used several-days-long records to minimize the trade-off between the quality factors and the CMT, but it was not eliminated completely. We also inverted each record separately to get error estimates of the obtained parameters. Consequently, we employed GGP records of 60-h length for several published modal-quality-factor sets and inverted only the same three CMT components. The obtained CMT tensors are close to the solution from the joint Q-CMT inversion of longer records, and the resulting variability of the CMT components is smaller than the differences among routine agency solutions. Reliable low-frequency CMT components can thus be obtained for any quality factors from the studied sets.
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
Sen, Sandeep; Gode, Ameya; Ramanujam, Srirama; Ravikanth, G; Aravind, N A
2016-11-01
The center of diversity of Piper nigrum L. (black pepper), one of the most highly valued spice crops, is reported to be India. Black pepper is naturally distributed in India in the Western Ghats biodiversity hotspot, which is the only known existing source of its wild germplasm in the world. We used ecological niche models to predict the potential distribution of wild P. nigrum in the present and in two future climate change scenarios, viz. A1B and A2A, for the year 2080. Three topographic and nine uncorrelated bioclim variables were used to develop the niche models. The environmental variables influencing the distribution of wild P. nigrum across different climate change scenarios were identified. We also assessed the direction and magnitude of the niche centroid shift and the change in niche breadth to estimate the impact of projected climate change on the distribution of P. nigrum. The study shows a niche centroid shift in the future climate scenarios. Both projected future climate scenarios predicted a reduction in the habitat of P. nigrum in the Southern Western Ghats, which harbors many wild accessions of P. nigrum. Our results highlight the impact of future climate change on P. nigrum and provide useful information for designing sound germplasm conservation strategies for P. nigrum.
NASA Astrophysics Data System (ADS)
Polack, J. K.; Flaska, M.; Enqvist, A.; Sosa, C. S.; Lawrence, C. C.; Pozzi, S. A.
2015-09-01
Organic scintillators are frequently used for measurements that require sensitivity to both photons and fast neutrons because of their pulse shape discrimination capabilities. In these measurement scenarios, particle identification is commonly handled using the charge-integration pulse shape discrimination method. This method works particularly well for high-energy depositions, but is prone to misclassification for relatively low-energy depositions. A novel algorithm has been developed for automatically performing charge-integration pulse shape discrimination in a consistent and repeatable manner. The algorithm is able to estimate the photon and neutron misclassification corresponding to the calculated discrimination parameters, and is capable of doing so using only the information measured by a single organic scintillator. This paper describes the algorithm and assesses its performance by comparing algorithm-estimated misclassification to values computed via a more traditional time-of-flight (TOF) estimation. A single data set was processed using four different low-energy thresholds: 40, 60, 90, and 120 keVee. Overall, the results compared well between the two methods; in most cases, the algorithm-estimated values fell within the uncertainties of the TOF-estimated values.
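The underlying charge-integration discriminant is simply the ratio of tail charge to total charge over two integration gates. A minimal per-pulse sketch (gate lengths are illustrative; the paper's contribution is choosing the discrimination boundary and estimating misclassification automatically, which is not shown):

```python
import numpy as np

def psd_tail_total(pulse, peak_idx, fast_gate=20, long_gate=120):
    """Charge-integration pulse shape discrimination for one digitized
    scintillator pulse: the tail-to-total charge ratio is larger for
    neutrons than for photons.  Gate lengths are in samples and are
    illustrative, not the paper's settings.
    """
    total = np.sum(pulse[peak_idx:peak_idx + long_gate])
    tail = np.sum(pulse[peak_idx + fast_gate:peak_idx + long_gate])
    return tail / total if total > 0 else 0.0
```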
Oscillometric Blood Pressure Estimation: Past, Present, and Future.
Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail
2015-01-01
The use of automated blood pressure (BP) monitoring is growing as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP but these algorithms are scattered in many different publications or patents. Moreover, considering that oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms that are used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms including the conventional maximum amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms that are based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each algorithm briefly, and discuss their advantages and disadvantages. This paper will also review the artifact removal techniques in oscillometry and the current standards for the automated BP monitors.
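For concreteness, the conventional maximum amplitude algorithm that the review takes as its baseline can be sketched in a few lines. The characteristic ratios below are common illustrative choices, not a standard, and real devices add envelope smoothing and artifact rejection:

```python
import numpy as np

def maa_blood_pressure(cuff_pressure, oscillation_env,
                       r_sys=0.55, r_dia=0.85):
    """Maximum amplitude algorithm (MAA) sketch for a deflation sweep.

    MAP is taken at the cuff pressure where the oscillation envelope
    peaks; systolic/diastolic pressures are read where the envelope
    falls to fixed fractions of that peak on either side.  Assumes
    cuff_pressure is monotonically decreasing over the sweep.
    """
    i_max = np.argmax(oscillation_env)
    map_pressure = cuff_pressure[i_max]
    peak = oscillation_env[i_max]
    hi = oscillation_env[:i_max + 1]             # above MAP (early deflation)
    sbp = cuff_pressure[np.argmin(np.abs(hi - r_sys * peak))]
    lo = oscillation_env[i_max:]                 # below MAP (late deflation)
    dbp = cuff_pressure[i_max + np.argmin(np.abs(lo - r_dia * peak))]
    return sbp, map_pressure, dbp
```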
NASA Astrophysics Data System (ADS)
Chang, Yaping; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Zhang, Shiqiang
2018-06-01
The long-term change of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates, such as the Tibetan Plateau (TP). This study proposed a modified algorithm for estimating ET based on the MOD16 algorithm on a global scale over alpine meadow on the TP in China. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was adopted to reduce the uncertainty in soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons were made between the ET observed using the Eddy Covariance (EC) and estimated using both the original and modified algorithms. The results revealed that the modified algorithm performed better than the original MOD16 algorithm, with the coefficient of determination (R^2) increasing from 0.26 to 0.68 and the root mean square error (RMSE) decreasing from 1.56 to 0.78 mm d^-1. The modified algorithm performed slightly better, with a higher R^2 (0.70) and lower RMSE (0.61 mm d^-1), for after-precipitation days than for non-precipitation days at the Suli site. Contrarily, better results were obtained for non-precipitation days than for after-precipitation days at the Arou, Tanggula, and Hulugou sites, indicating that the modified algorithm may be more suitable for estimating ET for non-precipitation days with higher accuracy than for after-precipitation days, which had large observation errors. The comparisons between the modified algorithm and two mainstream methods suggested that the modified algorithm could produce high-accuracy ET over the alpine meadow sites on the TP.
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
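The first, straight-line estimator amounts to an ordinary least-squares fit of calibration data; a sketch under assumed data (the actual SCAN Testbed characterization tables are not given in the abstract):

```python
import numpy as np

def fit_power_estimator(agc_counts, temperature, true_power_dbm):
    """Least-squares fit of a 'linear straight line' style estimator:
    input power modeled as a linear function of the digital AGC
    reading and temperature.  Calibration data are assumed; the fit
    is only valid over the near-linear region of the AGC response.
    """
    agc_counts = np.asarray(agc_counts, float)
    temperature = np.asarray(temperature, float)
    A = np.column_stack([agc_counts, temperature,
                         np.ones_like(agc_counts)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(true_power_dbm, float),
                                 rcond=None)
    return coeffs                                # (w_agc, w_temp, offset)

def estimate_power(coeffs, agc, temp):
    return coeffs[0] * agc + coeffs[1] * temp + coeffs[2]
```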
An Eccentricity Based Data Routing Protocol with Uniform Node Distribution in 3D WSN.
Hosen, A S M Sanwar; Cho, Gi Hwan; Ra, In-Ho
2017-09-16
Due to nonuniform node distribution, the energy consumption of nodes is imbalanced in clustering-based wireless sensor networks (WSNs). This effect can be stronger when nodes are deployed in a three-dimensional (3D) environment. In this regard, we propose the eccentricity-based data routing (EDR) protocol in a 3D WSN with uniform node distribution. It includes network partitioning into 3D subspaces/clusters of equal numbers of member nodes, an energy-efficient routing centroid (RC) node election, and a data routing algorithm. The RC node election is conducted in a quasi-static manner over a certain period, unlike the periodic cluster-head election of typical clustering-based routing. This not only reduces the energy consumption of nodes during the election phase, but also during intra-cluster communication. At the same time, the routing algorithm selects a forwarding node in a way that balances the energy consumption among RC nodes and reduces the number of hops towards the sink. The simulation results validate the performance supremacy of the EDR protocol compared to existing protocols in terms of various metrics, in particular steady state and network lifetime. The results also show that EDR is more robust under uniform node distribution than under nonuniform distribution.
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
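The equivalence the authors exploit is easy to state in code: mapping each vector x to (x - mean(x)) / ||x - mean(x)|| makes squared Euclidean distance equal to 2(1 - r), where r is the Pearson correlation, so a Euclidean k-means applies directly. A sketch (the paper's own pruning heuristic is not reproduced):

```python
import numpy as np

def standardize_rows(X):
    """Map each row x to (x - mean) / ||x - mean|| so that the squared
    Euclidean distance between rows equals 2 * (1 - Pearson r).  After
    this transform, any Euclidean k-means (including pruned variants
    such as Elkan's) reproduces Pearson-distance clustering for the
    same initial centroids, which is the equivalence noted above.
    """
    Xc = X - X.mean(axis=1, keepdims=True)
    return Xc / np.linalg.norm(Xc, axis=1, keepdims=True)

# e.g., with scikit-learn:
# from sklearn.cluster import KMeans
# labels = KMeans(n_clusters=10).fit_predict(standardize_rows(X))
```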
Evaluation of deformable image registration and a motion model in CT images with limited features.
Liu, F; Hu, Y; Zhang, Q; Kincaid, R; Goodman, K A; Mageras, G S
2012-05-07
Deformable image registration (DIR) is increasingly used in radiotherapy applications and provides the basis for a previously described model of patient-specific respiratory motion. We examine the accuracy of a DIR algorithm and a motion model with respiration-correlated CT (RCCT) images of a software phantom with known displacement fields, a physical deformable abdominal phantom with implanted fiducials in the liver, and small liver structures in patient images. The motion model is derived from a principal component analysis that relates volumetric deformations to the motion of the diaphragm or fiducials in the RCCT. Patient data analysis compares DIR with rigid registration as ground truth: the mean ± standard deviation 3D discrepancy of liver structure centroid positions is 2.0 ± 2.2 mm. The DIR discrepancy in the software phantom is 3.8 ± 2.0 mm in lung and 3.7 ± 1.8 mm in abdomen; discrepancies near the chest wall are larger than indicated by image feature matching. The markers' 3D discrepancy in the physical phantom is 3.6 ± 2.8 mm. The results indicate that visible features in the images are important for guiding the DIR algorithm. Motion model accuracy is comparable to DIR, indicating that two principal components are sufficient to describe DIR-derived deformation in these datasets.
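A PCA-based motion model of the general kind described (deformation fields reduced to a few principal components whose scores are driven by a surrogate signal) can be sketched as follows; the flattened-array shapes and the linear surrogate-to-score mapping are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def fit_motion_model(dvfs, surrogate, n_pc=2):
    """PCA respiratory motion model sketch.

    dvfs:      (n_phases, n_voxels*3) flattened DIR displacement fields
    surrogate: (n_phases,) diaphragm or fiducial position per phase
    Returns a function mapping a surrogate value to a predicted field.
    """
    mean = dvfs.mean(axis=0)
    U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    pcs = Vt[:n_pc]                              # principal components
    scores = (dvfs - mean) @ pcs.T               # (n_phases, n_pc)
    # linear map from surrogate to PC scores (a simplifying assumption)
    A = np.column_stack([surrogate, np.ones_like(surrogate)])
    W, *_ = np.linalg.lstsq(A, scores, rcond=None)

    def predict(s):
        sc = np.array([s, 1.0]) @ W              # predicted PC scores
        return mean + sc @ pcs                   # full displacement field
    return predict
```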
The Approximability of Partial Vertex Covers in Trees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mkrtchyan, Vahan; Parekh, Ojas D.; Segev, Danny
Motivated by applications in risk management of computational systems, we focus our attention on a special case of the partial vertex cover problem, where the underlying graph is assumed to be a tree. Here, we consider four possible versions of this setting, depending on whether vertices and edges are weighted or not. Two of these versions, where edges are assumed to be unweighted, are known to be polynomial-time solvable (Gandhi, Khuller, and Srinivasan, 2004). However, the computational complexity of this problem with weighted edges, and possibly with weighted vertices, has not been determined yet. The main contribution of this paper is to resolve these questions, by fully characterizing which variants of partial vertex cover remain intractable in trees, and which can be efficiently solved. In particular, we propose a pseudo-polynomial DP-based algorithm for the most general case of having weights on both edges and vertices, which is proven to be NP-hard. This algorithm provides a polynomial-time solution method when weights are limited to edges, and combined with additional scaling ideas, leads to an FPTAS for the general case. A secondary contribution of this work is to propose a novel way of using centroid decompositions in trees, which could be useful in other settings as well.
Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J
2016-04-01
Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically, it is unclear which algorithm performs the best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with impedance pneumography (IP), the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either ECG or PPG, and 44 on only ECG. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and 1.0 bpm when using PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using ECG than PPG. The toolbox of algorithms and data used in this study are publicly available.
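As a flavor of the simplest technique class, here is a frequency-domain RR estimator that operates on an already-extracted respiratory signal, i.e., stage two of the three-stage decomposition above; the extraction and fusion stages, where most of the 314 algorithms differ, are not shown:

```python
import numpy as np

def estimate_rr(resp_signal, fs):
    """Respiratory rate in breaths per minute from an extracted
    respiratory signal sampled at fs Hz: take the dominant spectral
    peak within a plausible breathing band (here 4-60 bpm, an
    illustrative choice).  Assumes the record is long enough for the
    band to contain at least one frequency bin.
    """
    X = np.abs(np.fft.rfft(resp_signal - np.mean(resp_signal))) ** 2
    f = np.fft.rfftfreq(len(resp_signal), d=1.0 / fs)
    band = (f >= 4 / 60) & (f <= 60 / 60)
    return 60.0 * f[band][np.argmax(X[band])]
```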
Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods
NASA Technical Reports Server (NTRS)
Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo
2004-01-01
In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed: First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information into the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically-consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates. Although satellite estimates of latent heating are based upon instantaneous, footprint-scale data, suppression of random errors requires averaging to at least half-degree resolution. Analysis of mesoscale and larger space-time scale phenomena based upon passive and passive/active microwave heating estimates from TRMM, SSMI, and AMSR data will be presented at the conference.
Investigation of Seismic Events associated with the Sinkhole at Napoleonville Salt Dome, Louisiana
NASA Astrophysics Data System (ADS)
Nayak, A.; Dreger, D. S.
2015-12-01
This study describes the ongoing efforts in analysis of the intense sequence of complex seismic events associated with the formation of a large sinkhole at Napoleonville Salt Dome, Assumption Parish, Louisiana in August 2012. Point source centroid seismic moment tensor (MT) inversion of these events, using data from a temporary network of broadband stations established by the United States Geological Survey, had previously revealed large volume-increase components. We investigate the effect of the 3D velocity structure of the salt dome on wave propagation in the frequency range of interest (0.1-0.3 Hz) by forward modeling synthetic waveforms using MT solutions that were computed with Green's functions assuming two separate 1D velocity models, one for stations over the salt dome and one for stations on the sedimentary strata surrounding the salt dome. We also use a matched filter technique, with the waveforms of the larger events as templates, to detect smaller events that went undetected by the automated grid-search based scanning and MT inversion algorithm. We also analyze the change in spectral content of the events, many of which exhibit a spectral peak at 0.4 Hz with a duration of > 60 seconds. The decrease in spectral amplitudes with distance also gives an estimate of the high anelastic attenuation that damps reverberations within the shallow low-velocity layers. Finally, we use noise cross-correlation analysis to explore changes in the Green's functions during the development of the sinkhole and to verify the sediment velocity model by comparing observed and synthetic surface wave dispersion.
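The matched-filter step reduces to sliding normalized cross-correlation of a template event against continuous data. A single-channel sketch (the study stacks correlations over stations and components, and the detection threshold here is arbitrary):

```python
import numpy as np

def matched_filter_detect(template, data, threshold=0.7):
    """Detect small events by normalized cross-correlation of a larger
    event's waveform (template) against continuous single-channel
    data.  Returns candidate onset indices and the correlation trace.
    """
    nt = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(data) - nt + 1)
    for i in range(len(cc)):
        w = data[i:i + nt]
        s = w.std()
        cc[i] = ((w - w.mean()) * t).sum() / (nt * s) if s > 0 else 0.0
    return np.where(cc >= threshold)[0], cc
```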
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to verify the integrity of the Bees Algorithm. The experimental studies were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can estimate the sFADE parameters more accurately when suitable initial guess values are chosen. In summary, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
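For readers unfamiliar with it, the Bees Algorithm is a population search that concentrates neighborhood sampling around the best sites. A minimal version for parameter estimation is sketched below; population sizes and the neighborhood radius are generic defaults, not the settings used in the study, and 'loss' would wrap the sFADE misfit between modeled and observed breakthrough curves:

```python
import numpy as np

def bees_algorithm(loss, bounds, n=30, m=10, e=3, nep=7, nsp=3,
                   ngh=0.1, iters=200, seed=0):
    """Minimal Bees Algorithm: n scout bees, m selected sites (e of
    them elite), with more recruits searching the neighborhood of
    elite sites.  'bounds' is a list of (low, high) per parameter.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(lo)
    bees = rng.uniform(lo, hi, size=(n, dim))       # initial scouts
    for _ in range(iters):
        bees = bees[np.argsort([loss(b) for b in bees])]
        new = [bees[0]]                             # keep best-so-far
        for i in range(m):                          # selected sites
            recruits = nep if i < e else nsp        # elite get more bees
            r = ngh * (hi - lo)
            cand = np.clip(bees[i] + rng.uniform(-r, r, (recruits, dim)),
                           lo, hi)
            cand = np.vstack([cand, bees[i]])
            new.append(cand[np.argmin([loss(c) for c in cand])])
        # the remaining bees scout new random sites
        new.extend(rng.uniform(lo, hi, size=(n - m - 1, dim)))
        bees = np.array(new)
    return bees[np.argmin([loss(b) for b in bees])]
```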
Improving clustering with metabolic pathway data.
Milone, Diego H; Stegmayer, Georgina; López, Mariana; Kamenetzky, Laura; Carrari, Fernando
2014-04-10
It is a common practice in bioinformatics to validate each group returned by a clustering algorithm through manual analysis, according to a priori biological knowledge. This procedure helps find functionally related patterns in order to propose hypotheses for their behavior and the biological processes involved. Therefore, this knowledge is used only as a second step, after the data have been clustered according to their expression patterns. Thus, it could be very useful to improve the clustering of biological data by incorporating prior knowledge into the cluster formation itself, in order to enhance the biological value of the clusters. A novel training algorithm for clustering is presented, which evaluates the biological internal connections of the data points while the clusters are being formed. Within this training algorithm, the calculation of distances between data points and neuron centroids includes a new term based on information from well-known metabolic pathways. Standard self-organizing map (SOM) training and the biologically inspired SOM (bSOM) training were tested with two real data sets of transcripts and metabolites from Solanum lycopersicum and Arabidopsis thaliana species. Classical data mining validation measures were used to evaluate the clustering solutions obtained by both algorithms. Moreover, a new measure that takes into account the biological connectivity of the clusters was applied. The results of bSOM show important improvements in convergence and performance for the proposed clustering method in comparison to standard SOM training, in particular from the application point of view. Analyses of the clusters obtained with bSOM indicate that including biological information during training can certainly increase the biological value of the clusters found with the proposed method. It is worth highlighting that this has effectively improved the results, which can simplify their further analysis. The algorithm is available as a web demo at http://fich.unl.edu.ar/sinc/web-demo/bsom-lite/. The source code and the data sets supporting the results of this article are available at http://sourceforge.net/projects/sourcesinc/files/bsom.
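The modified distance is the heart of the method and is easy to state schematically: the usual Euclidean term plus a penalty for weak pathway connectivity. The weighting and the [0, 1] similarity below are our schematic assumptions, not the paper's exact term:

```python
import numpy as np

def bsom_distance(x, w, bio_similarity, gamma=0.5):
    """Distance used when assigning a data point to a neuron centroid
    in a biologically inspired SOM: the Euclidean term plus a penalty
    when the point is poorly connected (via known metabolic pathways)
    to the cluster.  'bio_similarity' in [0, 1] and 'gamma' are
    schematic placeholders for the paper's pathway-based term.
    """
    return np.sum((x - w) ** 2) + gamma * (1.0 - bio_similarity)
```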
Galaxy cluster center detection methods with weak lensing
NASA Astrophysics Data System (ADS)
Simet, Melanie
The precise location of galaxy cluster centers is a persistent problem in weak lensing mass estimates and in interpretations of clusters in a cosmological context. In this work, we test methods of centroid determination from weak lensing data and examine the effects of such self-calibration on the measured masses. Drawing on lensing data from the Sloan Digital Sky Survey Stripe 82, a 275-square-degree region of coadded data in the Southern Galactic Cap, together with a catalog of MaxBCG clusters, we show that halo substructure as well as shape noise and stochasticity in galaxy positions limit the precision of such a self-calibration (in the context of Stripe 82, to ~500 h^-1 kpc or larger) and bias the mass estimates around these points to a level that is likely unacceptable for the purposes of making cosmological measurements. We also project the usefulness of this technique in future surveys.
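The following Python sketch illustrates one simple form of such self-calibration: grid-searching candidate centers and scoring each by the misfit of the binned tangential shear to a power-law (SIS-like) profile. The profile shape, binning, and scoring are assumptions for illustration, not the analysis actually performed on the Stripe 82 data.

    import numpy as np

    def tangential_shear(g1, g2, x, y, cx, cy):
        # Rotate shear components into the tangential frame about (cx, cy).
        phi = np.arctan2(y - cy, x - cx)
        return -(g1 * np.cos(2 * phi) + g2 * np.sin(2 * phi))

    def centre_score(g1, g2, x, y, cx, cy, bins=10):
        # Bin the tangential shear in radius and fit g_t = A / r (SIS-like);
        # return the residual misfit, so lower scores mean better centers.
        r = np.hypot(x - cx, y - cy)
        gt = tangential_shear(g1, g2, x, y, cx, cy)
        edges = np.quantile(r, np.linspace(0, 1, bins + 1))
        idx = np.digitize(r, edges[1:-1])
        prof = np.array([gt[idx == b].mean() for b in range(bins)])
        rmid = np.array([r[idx == b].mean() for b in range(bins)])
        amp = np.sum(prof / rmid) / np.sum(1.0 / rmid ** 2)  # least-squares A
        return np.sum((prof - amp / rmid) ** 2)

    def best_centre(g1, g2, x, y, grid):
        # Candidate center with the lowest shear-profile misfit wins.
        return min(grid, key=lambda c: centre_score(g1, g2, x, y, *c))

In practice the abstract's point is that shape noise, substructure, and galaxy-position stochasticity flatten this misfit surface, so the recovered minimum can sit hundreds of kiloparsecs from the true halo center.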
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.
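As a schematic stand-in for signal-level fusion (not the paper's cooperative neural network), the sketch below fuses several restored estimates of the same scene by pixel-wise inverse-variance weighting, which is the minimum-variance combination under independent zero-mean noise; the noise variances and data are illustrative.

    import numpy as np

    def fuse(estimates, noise_vars):
        # Pixel-wise minimum-variance fusion of K image estimates.
        # estimates: array-like (K, H, W); noise_vars: K noise variances.
        w = 1.0 / np.asarray(noise_vars, dtype=float)
        w /= w.sum()  # normalized inverse-variance weights
        return np.tensordot(w, np.asarray(estimates), axes=1)

    # Example: fuse three noisy copies of the same scene.
    rng = np.random.default_rng(1)
    clean = rng.uniform(size=(64, 64))
    noisy = [clean + rng.normal(scale=s, size=clean.shape)
             for s in (0.1, 0.2, 0.4)]
    fused = fuse(noisy, [0.1 ** 2, 0.2 ** 2, 0.4 ** 2])

The paper's contribution is to reach a robust fused estimate without knowing these variances or the regularization parameter in advance, which this fixed-weight sketch deliberately sidesteps.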
Analysis of the Command and Control Segment (CCS) attitude estimation algorithm
NASA Technical Reports Server (NTRS)
Stockwell, Catherine
1993-01-01
This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin-axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.
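A hedged Python sketch of a differential-correction (Gauss-Newton) loop of this general kind appears below. The measurement models, a spin-axis-to-Sun angle and an Earth chord from standard spherical-triangle geometry with a fixed sensor mounting angle gamma, are simplified stand-ins, not the CCS flight models; whether the loop converges or diverges depends on the starting attitude and geometry, which is exactly the behavior the paper maps out.

    import numpy as np

    def unit(ra, dec):
        # Unit vector from right ascension and declination (radians).
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

    def chord_angle(eta, rho, gamma):
        # Spherical-triangle Earth chord: cos(rho) = cos(eta)cos(gamma)
        # + sin(eta)sin(gamma)cos(chord/2), solved for the chord; rho is
        # the Earth's angular radius, eta the spin-axis-to-nadir angle.
        c = (np.cos(rho) - np.cos(eta) * np.cos(gamma)) / (np.sin(eta) * np.sin(gamma))
        return 2.0 * np.arccos(np.clip(c, -1.0, 1.0))

    def model(p, sun_dirs, nadir_dirs, rho, gamma):
        # Predicted sun angles and Earth chords for spin axis p = (ra, dec).
        axis = unit(*p)
        sun_angle = np.arccos(np.clip(sun_dirs @ axis, -1.0, 1.0))
        eta = np.arccos(np.clip(nadir_dirs @ axis, -1.0, 1.0))
        return np.concatenate([sun_angle, chord_angle(eta, rho, gamma)])

    def differential_correction(p0, obs, sun_dirs, nadir_dirs, rho, gamma,
                                iters=10, h=1e-6):
        p = np.asarray(p0, dtype=float)
        for _ in range(iters):
            r = obs - model(p, sun_dirs, nadir_dirs, rho, gamma)
            # Central-difference Jacobian of the measurement model.
            J = np.column_stack([
                (model(p + h * e, sun_dirs, nadir_dirs, rho, gamma) -
                 model(p - h * e, sun_dirs, nadir_dirs, rho, gamma)) / (2.0 * h)
                for e in np.eye(2)])
            p = p + np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
        return p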
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2004-01-01
Rainfall rate estimates from space-borne instruments are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5-deg resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and to the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5-deg resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem, and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5-deg resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar; error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5-deg resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of the cloud-resolving model simulated profiles supporting the algorithm.
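Since the abstract describes a retrieval supported by a database of cloud-resolving model simulations, a Bayesian database average of the following schematic form conveys the idea; the Gaussian brightness-temperature weighting, channel error scale sigma, and database layout are illustrative assumptions, not the TMI algorithm's actual specification.

    import numpy as np

    def retrieve(tb_obs, tb_db, rain_db, heat_db, sigma=2.0):
        # Posterior-mean surface rain rate and latent-heating profile.
        # tb_obs:  observed brightness temperatures, shape (C,)
        # tb_db:   simulated brightness temperatures, shape (N, C)
        # rain_db: database surface rain rates, shape (N,)
        # heat_db: database latent-heating profiles, shape (N, L)
        d2 = np.sum((tb_db - tb_obs) ** 2, axis=1) / sigma ** 2
        w = np.exp(-0.5 * (d2 - d2.min()))  # shift exponent for stability
        w /= w.sum()
        return w @ rain_db, w @ heat_db

Under this weighting, a retrieval can only be as unbiased as its database is representative, which is why the abstract attributes both the bias reduction and the remaining profile-shape biases to the breadth and physical consistency of the supporting cloud-resolving model simulations.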