Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132
A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.
Quan, Wei; Fang, Jiancheng
2010-01-01
A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a specified angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is solved as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98%, while that of the Delaunay identification method is only 94%. The identification time of this method is at most 50 ms.
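The angular distance between pairs of star points is the basic measurement this method votes on. Below is a minimal sketch of computing it from focal-plane coordinates, assuming an idealized pinhole camera with focal length f; the function names and example values are illustrative, not from the paper.

```python
import numpy as np

def star_unit_vector(x, y, f):
    """Unit line-of-sight vector for a star imaged at (x, y) on the
    focal plane of a pinhole camera with focal length f (same units)."""
    v = np.array([x, y, f], dtype=float)
    return v / np.linalg.norm(v)

def angular_distance(p1, p2, f):
    """Angular distance (radians) between two imaged star points."""
    u1, u2 = star_unit_vector(*p1, f), star_unit_vector(*p2, f)
    # Clip to guard against round-off pushing the dot product past 1.
    return np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))

# Example: two stars 100 pixels apart on a sensor with f = 3000 pixels.
print(np.degrees(angular_distance((0, 0), (100, 0), 3000.0)))
```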
Adaptive particle swarm optimization for optimal orbital elements of binary stars
NASA Astrophysics Data System (ADS)
Attia, Abdel-Fattah
2016-12-01
The paper presents an adaptive particle swarm optimization (APSO) as an alternative method to determine the optimal orbital elements of the star η Bootis of MK type G0 IV. The proposed algorithm transforms the problem of finding periodic orbits into the problem of detecting the global minimizers of a function, to obtain the best fit of the Keplerian and phase curves. The experimental results demonstrate that the proposed APSO approach is generally more accurate than the standard particle swarm optimization (PSO) and other published optimization algorithms in terms of solution accuracy, convergence speed, and algorithm reliability.
A triangle voting algorithm based on double feature constraints for star sensors
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang
2018-02-01
A novel autonomous star identification algorithm is presented in this study. In the proposed algorithm, each sensor star constructs multiple triangles with its bright neighbor stars and obtains its candidates by a triangle voting process, in which the triangle is considered the basic voting element. In order to accelerate the algorithm and reduce the memory required for the star database, feature extraction is carried out to reduce the dimension of the triangles, and each triangle is described by its base and height. During the identification period, a voting scheme based on double feature constraints is proposed to implement the triangle voting. This scheme guarantees that only a catalog star satisfying both features can vote for the sensor star, which improves the robustness towards false stars. The simulation and real star image tests demonstrate that, compared with two other algorithms, the proposed algorithm is more robust towards position noise, magnitude noise, and false stars.
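To make the dimension reduction concrete, the sketch below reduces a star triangle to a (base, height) pair from its three side lengths. Treating the longest side as the base and deriving the height from Heron's formula is an assumption about the feature definition, which the abstract does not spell out.

```python
import numpy as np

def triangle_features(a, b, c):
    """Reduce a triangle with side lengths a, b, c (e.g., angular
    distances) to a (base, height) feature pair. Base: the longest
    side; height: distance from the opposite vertex, obtained from
    the triangle's area via Heron's formula."""
    base = max(a, b, c)
    s = 0.5 * (a + b + c)                      # semi-perimeter
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    height = 2.0 * area / base
    return base, height

print(triangle_features(3.0, 4.0, 5.0))  # -> (5.0, 2.4)
```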
A voting-based star identification algorithm utilizing local and global distribution
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global and local distributions of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm achieves a 99.81% identification rate with 2-pixel standard deviation positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust towards noise, and its average identification time and required memory are lower. Furthermore, the real sky test shows that the proposed algorithm performs well on real star images.
Minet, V; Baudar, J; Bailly, N; Douxfils, J; Laloy, J; Lessire, S; Gourdin, M; Devalet, B; Chatelain, B; Dogné, J M; Mullier, F
2014-06-01
Accurate diagnosis of heparin-induced thrombocytopenia (HIT) is essential but remains challenging. We have previously demonstrated, in a retrospective study, the usefulness of the combination of the 4Ts score, AcuStar HIT and heparin-induced multiple electrode aggregometry (HIMEA) with optimized thresholds. We aimed to prospectively explore the performance of our optimized diagnostic algorithm on patients with suspected HIT. The secondary objective was to evaluate the performance of AcuStar HIT-Ab (PF4-H) in comparison with the clinical outcome. 116 inpatients with clinically suspected immune HIT were included. Our optimized diagnostic algorithm was applied to each patient. The sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) of the overall diagnostic strategy, as well as of AcuStar HIT-Ab (at the manufacturer's thresholds and at our thresholds), were calculated using the clinical diagnosis as the reference. Among the 116 patients, 2 had clinically diagnosed HIT. These 2 patients were positive on AcuStar HIT-Ab, AcuStar HIT-IgG, and HIMEA. Using our optimized algorithm, all patients were correctly diagnosed. AcuStar HIT-Ab at our cut-off (>9.41 U/mL) and at the manufacturer's cut-off (>1.00 U/mL) both showed a sensitivity of 100.0%, with specificities of 99.1% and 90.4%, respectively. The combination of the 4Ts score, the HemosIL® AcuStar HIT and HIMEA with optimized thresholds may be useful for the rapid and accurate exclusion of the diagnosis of immune HIT.
SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields
NASA Astrophysics Data System (ADS)
Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.
2017-10-01
We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage over existing spectrum extraction codes is a graphical user interface (GUI) and a convenient system for visualizing the data and extraction parameters. The package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from extracting the spectra of several closely located or superimposed stars, it allows object spectra to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. The current version provides optimal extraction of 2D spectra using an aperture and a Gaussian function as the PSF (point spread function). In the future, the package will be supplemented with the possibility of building a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show the results of extracting several interesting object spectra from different telescopes.
Enzymatic extraction of star gooseberry (Phyllanthus acidus) juice with high antioxidant level
NASA Astrophysics Data System (ADS)
Loan, Do Thi Thanh; Tra, Tran Thi Thu; Nguyet, Ton Nu Minh; Man, Le Van Viet
2017-09-01
Ascorbic acid and phenolic compounds are the main antioxidants in star gooseberry (Phyllanthus acidus) fruit. In this study, a Pectinex Ultra SP-L preparation with pectinase activity was used in the extraction of star gooseberry juice. The effects of pectinase concentration and biocatalytic time on the content of ascorbic acid and phenolic compounds and on the antioxidant activity of the fruit juice were first investigated. Response surface methodology was then used to optimize the enzymatic extraction conditions for maximizing the antioxidant activity of the star gooseberry juice. The optimal pectinase concentration and biocatalytic time were 19 polygalacturonase units per 100 g pulp dry weight and 67 min, respectively, under which the maximal antioxidant activity reached 5595±6 µmol Trolox equivalent per 100 g juice dry weight. On the basis of a second-order extraction kinetic model, the extraction rate constants of ascorbic acid and phenolic compounds in the enzymatic extraction increased by approximately 21% and 157%, respectively, in comparison with the conventional extraction. Applying a pectinase preparation to fruit juice extraction therefore shows potential for improving the antioxidant level of the product.
A software package for evaluating the performance of a star sensor operation
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Sreejith, A. G.; Nirmal, K.; Ambily, S.; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant
2017-02-01
We have developed a low-cost off-the-shelf component star sensor (StarSense) for use in minisatellites and CubeSats to determine the attitude of a satellite in orbit. StarSense is an imaging camera with a limiting magnitude of 6.5, which extracts information from the star patterns it records in its images. The star sensor implements a centroiding algorithm to find the centroids of the stars in the image, a geometric voting algorithm for star pattern identification, and the QUEST algorithm for attitude quaternion calculation. Here, we describe the software package used to evaluate the performance of these algorithms as a single star sensor operating system. We simulate the ideal case, where sky background and instrument errors are omitted, and a more realistic case, where noise and camera parameters are added to the simulated images. We evaluate performance parameters of the algorithms such as attitude accuracy, calculation time, required memory, star catalog size, and sky coverage, and estimate the errors introduced by each algorithm. This software package is written for use in MATLAB. The testing is parametrized for different hardware parameters, such as the focal length of the imaging setup, the field of view (FOV) of the camera, angle measurement accuracy, and distortion effects, and can therefore be applied to evaluate the performance of such algorithms in any star sensor. For its hardware implementation on our StarSense, we are currently porting the code to functions written in C, keeping in view easy implementation on any star sensor electronics hardware.
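As an illustration of the centroiding step, here is a minimal intensity-weighted centroid in Python; the window size and background thresholding are assumptions, since the abstract does not give StarSense's specific parameters (the actual package is MATLAB/C).

```python
import numpy as np

def centroid(window, threshold=0.0):
    """Intensity-weighted centroid (x, y) of a star spot in a small
    image window; pixels at or below `threshold` are ignored."""
    w = np.where(window > threshold, window - threshold, 0.0)
    total = w.sum()
    if total == 0:
        raise ValueError("no signal above threshold")
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return (xs * w).sum() / total, (ys * w).sum() / total

# Example: a 5x5 window with a symmetric spot centered at (2.0, 2.0).
win = np.zeros((5, 5))
win[1:4, 1:4] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]
print(centroid(win))  # -> (2.0, 2.0)
```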
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-01-01
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system, with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, a process called star mapping. Software simulation and a night-sky experiment are performed to validate the efficiency and reliability of the proposed method. PMID:28825684
Design and application of star map simulation system for star sensors
NASA Astrophysics Data System (ADS)
Wu, Feng; Shen, Weimin; Zhu, Xifang; Chen, Yuheng; Xu, Qinquan
2013-12-01
Modern star sensors measure attitude automatically and with high accuracy, helping assure the performance of spacecraft. They achieve accurate attitudes by applying algorithms to process star maps obtained by the star camera mounted on them. Star maps therefore play an important role in designing star cameras and developing processing algorithms. Furthermore, star maps provide significant support for fully examining the performance of star sensors before launch. However, it is not always convenient to supply abundant star maps by taking pictures of the sky. Thus, computer-aided star map simulation attracts considerable interest by virtue of its low cost and convenience. A method to simulate star maps by programming and extending the functionality of the optical design program ZEMAX is proposed, and a star map simulation system is established. Firstly, based on an analysis of the working procedures by which star sensors measure attitude and of the basic method of designing optical systems in ZEMAX, the principle of simulating star sensor imaging is described in detail. The theory of adding false stars and noise and of outputting maps is discussed, and the corresponding approaches are proposed. Then, by external programming, the star map simulation program is designed and produced, and its user interface and operation are introduced. Applications of the star map simulation method to evaluating the optical system, star image extraction algorithms, and star identification algorithms, and to calibrating system errors, are presented. It is shown that the proposed simulation method provides valuable support for the study of star sensors and efficiently improves their performance.
Spacecraft angular velocity estimation algorithm for star tracker based on optical flow techniques
NASA Astrophysics Data System (ADS)
Tang, Yujie; Li, Jian; Wang, Gangyi
2018-02-01
An integrated navigation system often uses a traditional gyro and a star tracker for high-precision navigation, with the shortcomings of large volume, heavy weight, and high cost. With the development of autonomous navigation for deep space and small spacecraft, star trackers have gradually been used for attitude calculation and direct angular velocity measurement. At the same time, given the dynamic imaging requirements of remote sensing and other imaging satellites, how to measure the angular velocity under dynamic conditions to improve the accuracy of the star tracker is a focus of current research. We propose an approach to measure the angular rate without a gyro and improve the dynamic performance of the star tracker. First, a star extraction algorithm based on morphology is used to extract the star regions, and the stars in two successive images are matched by angular distance voting. The displacement of each star image is then measured by an improved optical flow method. Finally, the triaxial angular velocity of the star tracker is calculated from the star vectors using the least squares method. The method has the advantages of fast matching, strong noise resistance, and good dynamic performance, and the triaxial angular velocity of the star tracker can be obtained accurately. The star tracker can thus achieve better tracking performance and dynamic attitude accuracy, laying a good foundation for the wide application of various satellites and complex space missions.
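For the final least-squares step, a common formulation (an assumption here; the abstract does not give the paper's exact equations) relates each star's unit vector u and its measured rate of change to the body angular velocity via du/dt = -ω × u = [u]× ω, which stacks into an overdetermined linear system:

```python
import numpy as np

def angular_velocity(units, rates):
    """Least-squares body angular velocity from N star unit vectors
    `units` (N, 3) and their measured time derivatives `rates` (N, 3),
    using du/dt = -omega x u = [u]_x omega."""
    A, b = [], []
    for u, du in zip(units, rates):
        # Cross-product (skew-symmetric) matrix of u.
        A.append([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
        b.append(du)
    A = np.vstack(A)                     # (3N, 3)
    b = np.hstack(b)                     # (3N,)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# Example: true omega = (0, 0, 0.01) rad/s, two stars along x and y.
u = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
du = np.cross(u, [0.0, 0.0, 0.01])       # = [u]_x omega
print(angular_velocity(u, du))           # ~ [0, 0, 0.01]
```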
Fast Quaternion Attitude Estimation from Two Vector Measurements
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
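For reference, here is a minimal sketch of the TRIAD construction mentioned above, which builds an orthonormal triad from the two observations in each frame and composes the triads into an attitude matrix; variable names are illustrative.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude matrix A with b = A @ r, from two vector
    observations in the body frame (b1, b2) and their reference-frame
    counterparts (r1, r2). The pair b1/r1 is the more trusted one."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(b1, b2) @ frame(r1, r2).T

# Example: body frame rotated -90 deg about z relative to reference.
r1, r2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
b1, b2 = np.array([0, -1.0, 0]), np.array([1.0, 0, 0])
print(np.round(triad(b1, b2, r1, r2), 6))
```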
Kinematic model for the space-variant image motion of star sensors under dynamical conditions
NASA Astrophysics Data System (ADS)
Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun
2015-06-01
A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, which involves all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations of the focal plane correspond to the slightly different orientation and extent of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error of eight successive iterations <0.002 pixel is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating the compensation algorithms of motion-blurred images.
NASA Astrophysics Data System (ADS)
Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing
2016-10-01
Endmember extraction is a key step in hyperspectral unmixing, and a new framework is proposed for it here. The proposed approach is based on swarm intelligence (SI) algorithms, which are discretized because the pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that often belong to the same classes. Three endmember extraction methods are proposed, based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to facilitate the spatial and spectral neighborhood data searches in noise estimation, one of the most important steps in OMNF. We then optimized the processing flow by computing the noise covariance matrix before the image covariance matrix to reduce the transmission of the original hyperspectral image data. These optimization strategies greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU implementation provides a significant speedup over the CPU implementation, especially for highly data-parallel and arithmetically intensive parts of the algorithm, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation meets the requirements of on-board real-time feature extraction.
NASA Astrophysics Data System (ADS)
Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul
2016-10-01
Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials useful to the optics and photonics research community. To the best of our knowledge, these two bio-inspired algorithms are used here for the first time in this particular field. The algorithms are used for modeling graphene oxide, and the performances of the two are compared. Two objective functions are used for different boundary values. The root mean square (RMS) deviation is determined and compared.
NEUTRON STAR MASS–RADIUS CONSTRAINTS USING EVOLUTIONARY OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A. L.; Morsink, S. M.; Fiege, J. D.
The equation of state of cold supra-nuclear-density matter, such as in neutron stars, is an open question in astrophysics. A promising method for constraining the neutron star equation of state is modeling pulse profiles of thermonuclear X-ray burst oscillations from hot spots on accreting neutron stars. The pulse profiles, constructed using spherical and oblate neutron star models, are comparable to what would be observed by a next-generation X-ray timing instrument like ASTROSAT, NICER, or a mission similar to LOFT. In this paper, we showcase the use of an evolutionary optimization algorithm to fit pulse profiles to determine the best-fit masses and radii. By fitting synthetic data, we assess how well the optimization algorithm can recover the input parameters. Multiple Poisson realizations of the synthetic pulse profiles, constructed with 1.6 million counts and no background, were fitted with the Ferret algorithm to analyze both statistical and degeneracy-related uncertainty and to explore how the goodness of fit depends on the input parameters. For the regions of parameter space sampled by our tests, the best-determined parameter is the projected velocity of the spot along the observer's line of sight, with an accuracy of ≤3% compared to the true value and with ≤5% statistical uncertainty. The next best determined are the mass and radius; for a neutron star with a spin frequency of 600 Hz, the best-fit mass and radius are accurate to ≤5%, with respective uncertainties of ≤7% and ≤10%. The accuracy and precision depend on the observer inclination and spot colatitude, with values of ∼1% achievable in mass and radius if both the inclination and colatitude are ≳60°.
Algorithms for optimizing the treatment of depression: making the right decision at the right time.
Adli, M; Rush, A J; Möller, H-J; Bauer, M
2003-11-01
Medication algorithms for the treatment of depression are designed to optimize both treatment implementation and the appropriateness of treatment strategies. Thus, they are essential tools for treating and avoiding refractory depression. Treatment algorithms are explicit treatment protocols that provide specific therapeutic pathways and decision-making tools at critical decision points throughout the treatment process. The present article provides an overview of major projects of algorithm research in the field of antidepressant therapy. The Berlin Algorithm Project and the Texas Medication Algorithm Project (TMAP) compare algorithm-guided treatments with treatment as usual. The Sequenced Treatment Alternatives to Relieve Depression Project (STAR*D) compares different treatment strategies in treatment-resistant patients.
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the geometric correction of remote sensing images, a large number of redundant control points can result in low correction accuracy. To solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest data set possible to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional geometric correction methods that use Ground Control Points (GCPs), simulation experiments are carried out that correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
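A minimal sketch of the RANSAC filtering idea applied to control points follows, here with a 2D affine transform as the geometric model; the model choice, iteration count, and inlier tolerance are assumptions, since the abstract does not specify them.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=500, tol=2.0, seed=0):
    """Filter matched control-point pairs with RANSAC. src, dst:
    (N, 2) coordinate arrays. Returns the boolean inlier mask of the
    affine model dst = [src, 1] @ M with the largest consensus set."""
    rng = np.random.default_rng(seed)
    A = np.hstack((src, np.ones((len(src), 1))))   # homogeneous coords
    best_mask = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        # Minimal sample: 3 non-collinear points define an affine map.
        idx = rng.choice(len(src), size=3, replace=False)
        M, *_ = np.linalg.lstsq(A[idx], dst[idx], rcond=None)
        resid = np.linalg.norm(A @ M - dst, axis=1)
        mask = resid < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

In practice one would refit the transform on the final inlier set; the redundant or erroneous control points are simply the points outside the returned mask.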
[Application of genetic algorithm in blending technology for extractions of Cortex Fraxini].
Yang, Ming; Zhou, Yinmin; Chen, Jialei; Yu, Minying; Shi, Xiufeng; Gu, Xijun
2009-10-01
To explore the feasibility of the genetic algorithm (GA) for multi-objective blending of Cortex Fraxini extractions. Taking as the optimization objective a combination of fingerprint similarity and the root-mean-square error of multiple key constituents, a new multi-objective optimization model of 10 batches of Cortex Fraxini extractions was built, and the blending coefficients were obtained by the genetic algorithm. The quality of the 10 batches of Cortex Fraxini extractions after blending was evaluated with fingerprint similarity and root-mean-square error as indexes. The quality of the blended extractions was clearly improved: compared with the fingerprint of the control sample, the similarity increased while the degree of variation decreased, and the relative deviation of the key constituents was less than 10%. This shows that the genetic algorithm works well for multi-objective blending of Cortex Fraxini extractions. The method can serve as a reference for controlling the quality of Cortex Fraxini extractions, and the genetic algorithm is advisable for blending extractions of Chinese medicines.
A hybrid genetic algorithm for resolving closely spaced objects
NASA Technical Reports Server (NTRS)
Abbott, R. J.; Lillo, W. E.; Schulenburg, N.
1995-01-01
A hybrid genetic algorithm is described for performing the difficult optimization task of resolving closely spaced objects appearing in space based and ground based surveillance data. This application of genetic algorithms is unusual in that it uses a powerful domain-specific operation as a genetic operator. Results of applying the algorithm to real data from telescopic observations of a star field are presented.
A Novel Extraction Approach of Extrinsic and Intrinsic Parameters of InGaAs/GaN pHEMTs
2015-07-01
For the first time, the artificial bee colony algorithm is applied to global-optimization-based parameter extraction, and a novel intrinsic ... conservation of the gate charge is well satisfied, which further validates this novel extraction method. Index Terms — InGaAs/GaN pHEMTs, artificial bee ... increase the uniqueness of the extraction. The artificial bee colony (ABC) algorithm is adopted as the optimizer due to its excellent ability to escape ...
Richardson-Lucy deblurring for the star scene under a thinning motion path
NASA Astrophysics Data System (ADS)
Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining
2015-05-01
This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurately estimating the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene using a thinned-motion-path blur model that describes the camera's path. Building the blur kernel from this thinned motion path is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind estimation of a kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to obtain the star point trajectory and thereby extract the blur kernel of the motion-blurred star image. How the motion blur model is then incorporated into the Richardson-Lucy (RL) deblurring algorithm is detailed, revealing its overall effectiveness. In addition, compared with a conventionally estimated blur kernel, experimental results show that obtaining the motion blur kernel with the thinning algorithm has lower complexity, higher efficiency, and better accuracy, which contributes to better restoration of motion-blurred star images.
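Once the blur kernel has been recovered from the thinned star trail, the RL update itself is standard. A minimal sketch with a known kernel, using FFT-based convolution from SciPy, is given below; the iteration count and initialization are assumptions (scikit-image's skimage.restoration.richardson_lucy offers a ready-made equivalent).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy deconvolution of `blurred` (2D float array)
    with a known PSF. Returns the restored image estimate."""
    psf = psf / psf.sum()                    # normalize the kernel
    psf_flip = psf[::-1, ::-1]               # adjoint of convolution
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)  # eps avoids divide-by-zero
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```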
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
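The final line-detection step described above can be prototyped with the Hough transform routines in scikit-image; a minimal sketch follows, with illustrative peak-separation parameters (the paper's object-removal and line-enhancement steps are assumed to have already produced the binary input).

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def detect_lines(binary_image, min_distance=9, min_angle=10):
    """Detect straight lines in a binary (object-removed) image.
    Returns (votes, angle, distance) triples, one per detected line,
    in normal form: x*cos(angle) + y*sin(angle) = distance."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
    h, theta, d = hough_line(binary_image, theta=angles)
    return list(zip(*hough_line_peaks(h, theta, d,
                                      min_distance=min_distance,
                                      min_angle=min_angle)))

# Example: a synthetic diagonal trail across a 64x64 frame.
img = np.eye(64, dtype=bool)
for votes, angle, dist in detect_lines(img):
    print(f"votes={votes}, angle={np.degrees(angle):.1f} deg, rho={dist:.1f}")
```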
Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker
Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun
2016-01-01
The low frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach, called Fourier analysis combined with the Vondrak filter method (FAVF), is proposed in this paper. Firstly, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both of the two test star trackers' attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm is demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals is reduced from [27.95″, 25.14″, 82.43″], 3σ to [16.12″, 15.89″, 53.27″], 3σ. PMID:27754320
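As a rough illustration of the Fourier-analysis half of the approach (the Vondrak filter is omitted here), an orbit-periodic low-frequency component can be isolated from an attitude residual series by keeping only FFT bins near harmonics of the orbital frequency; the series, tolerance, and harmonic count below are synthetic assumptions, not the paper's settings.

```python
import numpy as np

def orbital_harmonics(residuals, dt, orbit_period, n_harmonics=5, tol=0.05):
    """Extract the orbit-periodic low-frequency error from an attitude
    residual time series sampled every `dt` seconds, by keeping FFT
    bins within tol*f_orb of the first n_harmonics orbital harmonics."""
    spectrum = np.fft.rfft(residuals)
    freqs = np.fft.rfftfreq(len(residuals), d=dt)
    f_orb = 1.0 / orbit_period
    keep = np.zeros(len(freqs), dtype=bool)
    for k in range(1, n_harmonics + 1):
        keep |= np.abs(freqs - k * f_orb) < tol * f_orb
    spectrum[~keep] = 0.0                   # discard everything else
    return np.fft.irfft(spectrum, n=len(residuals))
```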
Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan
2018-01-01
The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. The optimized extraction conditions are found to be an ammonia concentration of 0.595%, an ethanol concentration of 58.45%, a circumfluence time of 2.5 h, and a liquid-solid ratio of 11.065:1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of genetic algorithms and artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra. PMID:29887907
Information extraction and transmission techniques for spaceborne synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.
1984-01-01
Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.
Exposure Time Optimization for Highly Dynamic Star Trackers
Wei, Xinguo; Tan, Wei; Li, Jian; Zhang, Guangjun
2014-01-01
Under highly dynamic conditions, the star-spots on the image sensor of a star tracker move across many pixels during the exposure time, which reduces star detection sensitivity and increases star location errors. However, this kind of effect can be compensated well by setting an appropriate exposure time. This paper focuses on how exposure time affects the star tracker under highly dynamic conditions and how to determine the most appropriate exposure time for this case. Firstly, the effect of exposure time on star detection sensitivity is analyzed by establishing the dynamic star-spot imaging model. Then the star location error is deduced based on the error analysis of the sub-pixel centroiding algorithm. Combining these analyses, the effect of exposure time on attitude accuracy is finally determined. Some simulations are carried out to validate these effects, and the results show that there are different optimal exposure times for different angular velocities of a star tracker with a given configuration. In addition, the results of night sky experiments using a real star tracker agree with the simulation results. The regularities summarized in this paper should prove helpful in the system design and dynamic performance evaluation of highly dynamic star trackers. PMID:24618776
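The core geometric trade-off the paper analyzes can be seen from the smear length of a star spot during the exposure. The back-of-the-envelope sketch below uses a small-angle approximation for a slew across the optical axis; the paper's full model also covers detection sensitivity and centroiding error, which this omits, and the example values are illustrative.

```python
import numpy as np

def smear_length_pixels(omega_deg_s, t_exp, focal_mm, pixel_um):
    """Approximate star-spot smear (pixels) for a star tracker slewing
    at omega_deg_s (deg/s) during an exposure of t_exp seconds, with
    focal length focal_mm (mm) and pixel pitch pixel_um (micrometers)."""
    angle_rad = np.radians(omega_deg_s) * t_exp   # angle swept in exposure
    return angle_rad * (focal_mm * 1e3) / pixel_um

# Example: 2 deg/s slew, 50 ms exposure, f = 25 mm, 6 um pixels.
print(smear_length_pixels(2.0, 0.05, 25.0, 6.0))  # ~7.3 pixels
```

Longer exposures collect more photons per star but spread them over more pixels, which is exactly why an optimum exposure time exists for each angular velocity.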
Attitude Determination Using Two Vector Measurements
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1998-01-01
Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. TRIAD, the earliest published algorithm for determining spacecraft attitude from two vector measurements, has been widely used in both ground-based and onboard attitude determination. Later attitude determination methods have been based on Wahba's optimality criterion for n arbitrarily weighted observations. The solution of Wahba's problem is somewhat difficult in the general case, but there is a simple closed-form solution in the two-observation case. This solution reduces to the TRIAD solution for certain choices of measurement weights. This paper presents and compares these algorithms as well as sub-optimal algorithms proposed by Bar-Itzhack, Harman, and Reynolds. Some new results will be presented, but the paper is primarily a review and tutorial.
Optimization of coronagraph design for segmented aperture telescopes
NASA Astrophysics Data System (ADS)
Jewell, Jeffrey; Ruane, Garreth; Shaklan, Stuart; Mawet, Dimitri; Redding, Dave
2017-09-01
The goal of directly imaging Earth-like planets in the habitable zone of other stars has motivated the design of coronagraphs for use with large segmented aperture space telescopes. In order to achieve an optimal trade-off between planet light throughput and diffracted starlight suppression, we consider coronagraphs comprised of a stage of phase control implemented with deformable mirrors (or other optical elements), pupil plane apodization masks (gray scale or complex valued), and focal plane masks (either amplitude only or complex-valued, including phase only such as the vector vortex coronagraph). The optimization of these optical elements, with the goal of achieving 10 or more orders of magnitude in the suppression of on-axis (starlight) diffracted light, represents a challenging non-convex optimization problem with a nonlinear dependence on control degrees of freedom. We develop a new algorithmic approach to the design optimization problem, which we call the "Auxiliary Field Optimization" (AFO) algorithm. The central idea of the algorithm is to embed the original optimization problem, for either phase or amplitude (apodization) in various planes of the coronagraph, into a problem containing additional degrees of freedom, specifically fictitious "auxiliary" electric fields which serve as targets to inform the variation of our phase or amplitude parameters leading to good feasible designs. We present the algorithm, discuss details of its numerical implementation, and prove convergence to local minima of the objective function (here taken to be the intensity of the on-axis source in a "dark hole" region in the science focal plane). Finally, we present results showing application of the algorithm to both unobscured off-axis and obscured on-axis segmented telescope aperture designs. The application of the AFO algorithm to the coronagraph design problem has produced solutions which are capable of directly imaging planets in the habitable zone, provided end-to-end telescope system stability requirements can be met. Ongoing work includes advances of the AFO algorithm reported here to design in additional robustness to a resolved star, and other phase or amplitude aberrations to be encountered in a real segmented aperture space telescope.
CONCAM's Fuzzy-Logic All-Sky Star Recognition Algorithm
NASA Astrophysics Data System (ADS)
Shamir, L.; Nemiroff, R. J.
2004-05-01
One of the purposes of the global Night Sky Live (NSL) network of fisheye CONtinuous CAMeras (CONCAMs) is to monitor and archive the entire bright night sky, track stellar variability, and search for transients. The high quality of raw CONCAM data allows automation of stellar object recognition, although distortions of the fisheye lens and frequent slight shifts in CONCAM orientations can make even this seemingly simple task formidable. To meet this challenge, a fuzzy logic based algorithm has been developed that transforms (x,y) image coordinates in the CCD frame into fuzzy right ascension and declination coordinates for use in matching with star catalogs. Using a training set of reference stars, the algorithm statically builds the fuzzy logic model. At runtime, the algorithm searches for peaks, and then applies the fuzzy logic model to perform the coordinate transformation before choosing the optimal star catalog match. The present fuzzy-logic algorithm works much better than our first generation, straightforward coordinate transformation formula. Following this essential step, algorithms dealing with the higher level data products can then provide a stream of photometry for a few hundred stellar objects visible in the night sky. Accurate photometry further enables the computation of all-sky maps of skyglow and opacity, as well as a search for uncataloged transients. All information is stored in XML-like tagged ASCII files that are instantly copied to the public domain and available at http://NightSkyLive.net. Currently, the NSL software detects stars and creates all-sky image files from eight different locations around the globe every 3 minutes and 56 seconds.
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, earlier work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
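A common closed-form realization of this kind of averaging, consistent with an attitude-matrix cost function, takes the average quaternion as the dominant eigenvector of the weighted outer-product matrix. The sketch below covers the scalar-weight case only and is an illustration of that general technique, not necessarily the Note's exact algorithm.

```python
import numpy as np

def average_quaternion(quats, weights=None):
    """Weighted average of unit quaternions (rows of `quats`, (N, 4)).
    The average is the eigenvector of M = sum(w_i * q_i q_i^T) with the
    largest eigenvalue; using q q^T makes the result invariant to the
    q / -q sign ambiguity of each input quaternion."""
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
    M = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]                  # eigh sorts eigenvalues ascending

# Example: a quaternion and its negation average to the same attitude.
q1 = np.array([0.0, 0.0, 0.0, 1.0])
print(average_quaternion([q1, -q1]))
```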
Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz
NASA Astrophysics Data System (ADS)
Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao
2018-05-01
In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operation, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives a better fitting accuracy than the preliminary parameters without optimization. In particular, the fitting accuracy of the Q value achieves a significant improvement after the optimization.
Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm
NASA Astrophysics Data System (ADS)
Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi
2014-01-01
This paper proposes a Takagi-Sugeno-Kang (TSK) type neuro-fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of the group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle, so the optimal structure and parameters are achieved simultaneously. The proposed method is demonstrated on modeling and control problems, and the results are compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.
Multiobjective optimization of temporal processes.
Song, Zhe; Kusiak, Andrew
2010-06-01
This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
An algorithm for automatic parameter adjustment for brain extraction in BrainSuite
NASA Astrophysics Data System (ADS)
Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.
2017-02-01
Brain extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in the definition of the brain mask.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia
2014-01-01
To meet the forecasting requirements for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on geometric theory are extracted from the flotation froth images as the input variables of the proposed soft-sensor model. The isometric mapping method is then used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
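As an illustration of the texture-feature step, gray-level co-occurrence matrix (GLCM) features can be prototyped with scikit-image. The particular distances, angles, and properties chosen below are assumptions, not the paper's configuration, and the function names follow scikit-image 0.19 or later (earlier versions spell them greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, levels=64):
    """Texture features from a gray-level co-occurrence matrix.
    gray_image: 2D uint8 array; quantized to `levels` gray levels."""
    img = (gray_image.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    # Average each property over all distance/angle combinations.
    return {p: graycoprops(glcm, p).mean() for p in props}

# Example on random noise (real froth images would show structure).
rng = np.random.default_rng(0)
print(glcm_features(rng.integers(0, 256, (128, 128), dtype=np.uint8)))
```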
Genetic algorithms used for the optimization of light-emitting diodes and solar thermal collectors
NASA Astrophysics Data System (ADS)
Mayer, Alexandre; Bay, Annick; Gaouyat, Lucie; Nicolay, Delphine; Carletti, Timoteo; Deparis, Olivier
2014-09-01
We present a genetic algorithm (GA) we developed for the optimization of light-emitting diodes (LED) and solar thermal collectors. The surface of a LED can be covered by periodic structures whose geometrical and material parameters must be adjusted in order to maximize the extraction of light. The optimization of these parameters by the GA enabled us to get a light-extraction efficiency η of 11.0% from a GaN LED (for comparison, the flat material has a light-extraction efficiency η of only 3.7%). The solar thermal collector we considered consists of a waffle-shaped Al substrate with NiCrOx and SnO2 conformal coatings. We must in this case maximize the solar absorption α while minimizing the thermal emissivity ɛ in the infrared. A multi-objective genetic algorithm has to be implemented in this case in order to determine optimal geometrical parameters. The parameters we obtained using the multi-objective GA enable α~97.8% and ɛ~4.8%, which improves results achieved previously when considering a flat substrate. These two applications demonstrate the interest of genetic algorithms for addressing complex problems in physics.
The cool-star spectral catalog: A uniform collection of IUE SWP-LOs
NASA Technical Reports Server (NTRS)
Ayres, T.; Lenz, D.; Burton, R.; Bennett, J.
1992-01-01
Over the past decade and a half of its operations, the International Ultraviolet Explorer has recorded low-dispersion spectrograms in the 1150-2000 A interval of more than 800 stars of late spectral type (F-M). The sub-2000 A region contains a number of emission lines that are key diagnostics of physical conditions in the high-excitation chromospheres and subcoronal 'transition zones' of such stars. Many of the sources have been observed a number of times, and the available collection of SWP-LO exposures in the IUE Archives exceeds 4,000. With support from the Astrophysics Data Program, we have assembled the archival material into a catalog of IUE far-UV fluxes of late-type stars. In order to ensure uniform processing of the spectra, we: (1) photometrically corrected the raw vidicon images with a custom version of the 1985 SWP ITF; (2) identified and eliminated sharp cosmic-ray 'hits' by means of a spatial filter; (3) extracted the spectral traces with the 'optimal' (weighted-slit) strategy; and (4) calibrated them against a well-characterized reference source, the DA white dwarf G191-B2B. Our approach is similar to that adopted by the IUE Project for its 'Final Archive', but our implementation is specialized to the case of chromospheric emission-line sources. We measured the resulting SWP-LO spectra using a semi-autonomous algorithm that establishes a smooth continuum by numerical filtering, and then fits the significant emissions (or absorptions) by means of a constrained Bevington-type multiple-Gaussian procedure. The algorithm assigns errors to the fitted fluxes - or upper limits in the absence of a significant detection - according to a model based on careful measurements of the noise properties of the IUE's intensified SEC cameras. Here, we describe the 'visualization' strategies we adopted to ensure human review of the semi-autonomous processing and measuring algorithms; the derivation of the noise model and the assignment of errors; and the structure of the final catalog as delivered to the Astrophysics Data System.
2014-09-01
The goal of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. Work to date has (i) developed time-of-flight extraction algorithms to perform USCT, (ii) begun developing image reconstruction algorithms for USCT, and (iii) developed ...
The Polychromatic Laser Guide Star: the ELP-OA demonstrator at Observatoire de Haute Provence
NASA Astrophysics Data System (ADS)
Foy, R.; Chatagnat, M.; Dubet, D.; Éric, P.; Eysseric, J.; Foy, F.-C.; Fusco, T.; Girard, J.; Laloge, A.; Le van Suu, A.; Messaoudi, B.; Perruchot, S.; Richaud, P.; Richaud, Y.; Rondeau, X.; Tallon, M.; Thiébaut, É.; Boër, M.
2007-07-01
For adaptive optics devices, tilt correction from the laser guide star alone can be achieved with the polychromatic laser guide star. We report progress on the first demonstrator of this concept, at Observatoire de Haute-Provence. We review the final steps of the feasibility studies, the optimization of the laser parameters, and the studies of the implementation at the OHP 1.52 m telescope, including the beam propagation from the laser room to the mesosphere and the algorithms for tip-tilt measurements.
Optimized principal component analysis on coronagraphic images of the fomalhaut system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.
We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
Optimal wavefront estimation of incoherent sources
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler
2014-08-01
Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.
Manual Optical Attitude Re-initialization of a Crew Vehicle in Space Using Bias Corrected Gyro Data
NASA Astrophysics Data System (ADS)
Gioia, Christopher J.
NASA and other space agencies have shown interest in sending humans on missions beyond low Earth orbit. Proposed is an algorithm that estimates the attitude of a manned spacecraft using measured line-of-sight (LOS) vectors to stars and gyroscope measurements. The Manual Optical Attitude Reinitialization (MOAR) algorithm and corresponding device draw inspiration from existing technology from the Gemini, Apollo and Space Shuttle programs. The improvement over these devices is the capability of estimating gyro bias completely independently from re-initializing attitude. It may be applied to the lost-in-space problem, where the spacecraft's attitude is unknown. In this work, a model was constructed that simulated gyro data using the Farrenkopf gyro model, and LOS measurements from a spotting scope were then computed from it. Using these simulated measurements, gyro bias was estimated by comparing measured interior star angles to those derived from a star catalog and then minimizing the difference using an optimization technique. Several optimization techniques were analyzed, and it was determined that the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm performed the best when combined with a grid search technique. Once estimated, the gyro bias was removed and attitude was determined by solving the Wahba Problem via the Singular Value Decomposition (SVD) approach. Several Monte Carlo simulations were performed that looked at different operating conditions for the MOAR algorithm. These included the effects of bias instability, using different constellations for data collection, sampling star measurements in different orders, and varying the time between measurements. A common method of estimating gyro bias and attitude in a Multiplicative Extended Kalman Filter (MEKF) was also explored and shown to be unsuitable for the MOAR algorithm. A prototype was also constructed to validate the proposed concepts. It was built using a simple spotting scope, a MEMS-grade IMU, and a Raspberry Pi computer. It was mounted on a tripod, used to target stars with the scope, and used to measure the rotation between them with the IMU. The raw measurements were then post-processed using the MOAR algorithm, and attitude estimates were determined. Two different constellations, the Big Dipper and Orion, were used for experimental data collection. The results suggest that the novel method of estimating gyro bias independently from attitude presented in this document is credible for use onboard a spacecraft.
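The Wahba Problem step mentioned above has a compact closed-form solution via the SVD. Below is a minimal sketch in Python/NumPy of that standard solution; the function name and interface are ours for illustration, not the MOAR implementation.

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights=None):
    """Solve the Wahba problem via SVD: find the rotation R minimizing
    sum_i w_i * ||b_i - R r_i||^2, given matched unit vectors in the
    body frame (body_vecs, shape (n, 3)) and reference frame (ref_vecs)."""
    if weights is None:
        weights = np.ones(len(body_vecs))
    # attitude profile matrix B = sum_i w_i * b_i r_i^T
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    # enforce det(R) = +1 so the result is a proper rotation
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt
```

Given two or more non-parallel star LOS vectors and their catalog directions, the returned matrix maps reference-frame vectors into the body frame, which is the attitude estimate.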
Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation
NASA Astrophysics Data System (ADS)
Christ, John A.; Goltz, Mark N.
2002-05-01
We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-colinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
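As an illustration of the complex-potential machinery involved, the sketch below evaluates the stream function for arbitrarily positioned extraction wells in uniform regional flow and locates the stagnation points by clearing dw/dz = 0 into a polynomial. This is a generic textbook construction under one common sign convention, not the authors' algorithm; all names are ours.

```python
import numpy as np

def stream_function(z, U, wells):
    """Stream function psi = Im(w) for uniform flow of speed U in the +x
    direction plus extraction wells; wells is a list of (Q, z0) with
    pumping rate Q > 0 at complex location z0 (one textbook convention)."""
    w = U * z - sum(Q / (2 * np.pi) * np.log(z - z0) for Q, z0 in wells)
    return w.imag

def stagnation_points(U, wells):
    """Roots of dw/dz = U - sum_j Q_j / (2 pi (z - z0_j)) = 0, cleared of
    denominators into a single polynomial in z."""
    locs = [z0 for _, z0 in wells]
    poly = U * np.poly(locs)                     # U * prod_k (z - z0_k)
    for j, (Q, _) in enumerate(wells):
        others = [z0 for k, (_, z0) in enumerate(wells) if k != j]
        poly = np.polyadd(poly, -(Q / (2 * np.pi)) * np.poly(others))
    return np.roots(poly)
```

The capture zone curve is then the streamline through each stagnation point, i.e. the contour of stream_function at the value it takes there, which is how superposition reduces delineation to root-finding plus contouring.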
Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics
2016-09-15
Measurement sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. A single vector measurement provides two independent parameters, since the unit-vector constraint removes a degree of freedom, making the problem underdetermined.
Cihan, Abdullah; Birkholzer, Jens; Bianchi, Marco
2014-12-31
Large-scale pressure increases resulting from carbon dioxide (CO2) injection in the subsurface can potentially impact caprock integrity, induce reactivation of critically stressed faults, and drive CO2 or brine through conductive features into shallow groundwater. Pressure management involving the extraction of native fluids from storage formations can be used to minimize pressure increases while maximizing CO2 storage. However, brine extraction requires pumping, transportation, possibly treatment, and disposal of substantial volumes of extracted brackish or saline water, all of which can be technically challenging and expensive. This paper describes a constrained differential evolution (CDE) algorithm for optimal well placement and injection/extraction control with the goal of minimizing brine extraction while achieving predefined pressure constraints. The CDE methodology was tested on a simple optimization problem whose solution can be partially obtained with a gradient-based optimization methodology. The CDE successfully estimated the true global optimum for both the extraction well location and the extraction rate needed for the test problem. A more complex example application of the developed strategy is also presented for a hypothetical CO2 storage scenario in a heterogeneous reservoir with a critically stressed fault near the injection zone. Through the CDE optimization algorithm coupled to a numerical vertically-averaged reservoir model, we successfully estimated optimal rates and locations for CO2 injection and brine extraction wells while simultaneously satisfying multiple pressure buildup constraints to avoid fault activation and caprock fracturing. The study shows that the CDE methodology is a very promising tool for solving other optimization problems related to geologic carbon sequestration (GCS) as well, such as reducing the 'Area of Review', designing monitoring, reducing the risk of leakage, and increasing storage capacity and trapping.
NASA Astrophysics Data System (ADS)
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. Seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 nonwater pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire the coefficients of optimal band combinations for extracting water extents. Among the AI algorithms, the artificial neural network algorithm and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristics were implemented. The index-based methods performed inconsistently across regions. Among the AI methods, PSO had the best performance, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations for accurately and stably extracting water extents in Landsat imagery.
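To make the meta-heuristic step concrete, here is a minimal particle swarm optimizer over a band-combination coefficient vector. The objective (for example, 1 minus the kappa coefficient on the training pixels), the parameter values, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: objective maps a coefficient vector of length dim to a
    loss to minimize. Bounds handling and velocity clamping are omitted."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()
```

A water/nonwater classifier would then threshold the dot product of each pixel's band vector with the optimized coefficients.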
Obtaining the phase in the star test using genetic algorithms
NASA Astrophysics Data System (ADS)
Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro
2004-10-01
The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems: the irradiance distribution at the image of a point source (such as a star) is given by the point spread function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the point source. Conversely, if we know the aberrations introduced by the optical system, diffraction theory lets us calculate the PSF. In this work we propose a method to find the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to noise introduced when recording the image. Results of the method are shown.
New knowledge-based genetic algorithm for excavator boom structural optimization
NASA Astrophysics Data System (ADS)
Hua, Haiyan; Lin, Shuwen
2014-03-01
Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize shallow and deep implicit constraint knowledge to cyclically guide the optimal search of the genetic algorithm. Based on this dual evolution mechanism, knowledge evolution and population evolution are connected by knowledge influence operators to improve the configurability of the knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more remarkably than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, combining multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-01-01
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is reduced to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to obtain the recognition result quickly. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-07-07
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is reduced to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to obtain the recognition result quickly. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
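The rotation-invariance idea can be illustrated compactly. The sketch below builds a 1-D feature from the sorted angular distances between a pivot star and a fixed number of neighbors, then matches it against catalog features by nearest distance. This is an illustrative stand-in for the paper's one_DVP construction, assuming unit direction vectors and equal-length patterns; all names are ours.

```python
import numpy as np

def vector_pattern(pivot, neighbors):
    """Rotation-invariant 1-D feature: sorted angular distances (radians)
    from a pivot star to its k neighbors. pivot: unit 3-vector;
    neighbors: (k, 3) array of unit 3-vectors."""
    cosines = np.clip(neighbors @ pivot, -1.0, 1.0)
    return np.sort(np.arccos(cosines))

def identify(observed_pattern, catalog_patterns):
    """Return the index of the catalog pattern closest (Euclidean) to the
    observed one; patterns must share the same neighbor count k."""
    dists = [np.linalg.norm(observed_pattern - c) for c in catalog_patterns]
    return int(np.argmin(dists))
```

Because the feature depends only on pairwise angles, rotating the stellar image leaves it unchanged, which is what reduces identification to a vector comparison.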
Boon, K H; Khalil-Hani, M; Malarvili, M B
2018-01-01
This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods while achieving good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. Time-domain, frequency-domain, and non-linear HRV features are then extracted from the pre-processed data in the feature extraction stage. These features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most previous works. This accuracy rate is achieved even with the HRV signal length reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, the sensitivity rate, which this paper treats as more important than the other performance metrics, can be improved with the trade-off of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, the PCA and Isomap algorithms are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visual separation and grouping of parameter vectors in three-dimensional space. PMID:29143772
NASA Astrophysics Data System (ADS)
Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael
2018-04-01
An updated road network, as a crucial part of the transportation database, plays an important role in various applications. Thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object-based road extraction approach from very high resolution satellite images. Based on object-based image analysis, our approach incorporates various spatial, spectral, and textural object descriptors, the capabilities of a fuzzy logic system for handling the uncertainties in road modelling, and the effectiveness and suitability of the ant colony algorithm for optimization of network-related problems. Four VHR optical satellite images acquired by the Worldview-2 and IKONOS satellites are used to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with those of four state-of-the-art algorithms and quantification of the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
CUTEX: CUrvature Thresholding EXtractor
NASA Astrophysics Data System (ADS)
Molinari, S.; Schisano, E.; Faustini, F.; Pestalozzi, M.; di Giorgio, A. M.; Liu, S.
2017-08-01
CuTEx analyzes images in the infrared bands and extracts sources from complex backgrounds, particularly star-forming regions that offer the challenges of crowding, a highly spatially variable background, and profiles that are not PSF-like, such as protostars in their accreting phase. The code is composed of two main algorithms, the first for source detection and the second for flux extraction. The code was originally written in IDL and has been ported to the license-free GDL language. CuTEx can also be used in other bands or in scientific cases different from its native one. This software is also available as an on-line tool from the Multi-Mission Interactive Archive web pages dedicated to the Herschel Observatory.
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method of the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
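The core criterion is simple to state and compute. With the between-class scatter Sb and within-class scatter Sw, MMC maximizes tr(W^T(Sb - Sw)W), which for an orthonormal projection W is solved by the leading eigenvectors of Sb - Sw. The following sketch assumes those standard scatter definitions; the function and its interface are ours, not the paper's code.

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Maximum margin criterion projection. X: (n_samples, n_features)
    data matrix; y: 1-D array of class labels. Returns the projected
    data and the projection matrix W."""
    y = np.asarray(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T                      # between-class
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))  # within
    evals, evecs = np.linalg.eigh(Sb - Sw)   # symmetric, so eigh applies
    W = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return X @ W, W
```

Since eigh returns orthonormal eigenvectors, the discriminant vectors obtained this way are orthogonal by construction, which is one of the two variants the abstract discusses.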
Advanced Optimal Extraction for the Spitzer/IRS
NASA Astrophysics Data System (ADS)
Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.
2010-02-01
We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
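In the spirit of the weighted extraction described above, a minimal PSF-weighted ("optimal") flux estimator for one cross-dispersion cut looks as follows. This is the textbook inverse-variance form, not the AdOpt implementation itself; names are ours.

```python
import numpy as np

def optimal_extract(data, psf, var):
    """PSF-weighted extraction of a point source across one spatial cut.
    data: observed pixel values; psf: normalized spatial profile at those
    pixels; var: pixel variances. Returns the flux estimate that minimizes
    variance among unbiased linear estimators."""
    w = psf / var
    return np.sum(w * data) / np.sum(w * psf)
```

Pixels where the profile is near zero contribute almost nothing, which is why this estimator is far less noisy than a plain sum over an aperture; extending it to several sources in the slit amounts to fitting several shifted profiles simultaneously.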
Grain Propellant Optimization Using Real Code Genetic Algorithm (RCGA)
NASA Astrophysics Data System (ADS)
Farizi, Muhammad Farraz Al; Oktovianus Bura, Romie; Fajar Junjunan, Soleh; Jihad, Bagus H.
2018-04-01
Grain design is important in rocket motor design: the total impulse and specific impulse (ISP) of the rocket motor are influenced by the grain design. One way to obtain a grain shape that generates the maximum total impulse is the Real Code Genetic Algorithm (RCGA) method. In this paper the RCGA is applied to the star grain of the Rx-450. The propellant burn area is computed analytically, while the combustion chamber pressure is obtained from zero-dimensional equations. The optimization reaches the desired target and increases the total impulse by 3.3% over the initial Rx-450 design.
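The zero-dimensional chamber pressure balance has a standard closed form when the burn rate follows r = a*Pc^n: mass generated by the burning surface equals nozzle mass flow, giving Pc = (rho_p * a * Ab * c* / At)^(1/(1-n)). The sketch below assumes that common form and SI units; the parameter names are ours and the paper's exact equations may differ.

```python
def chamber_pressure(rho_p, a, n, Ab, At, c_star):
    """Steady-state zero-dimensional chamber pressure of a solid motor.
    Balances propellant mass generation rho_p * a * Pc**n * Ab against
    nozzle flow Pc * At / c_star, with rho_p the propellant density,
    a and n the burn-rate law coefficients, Ab the burning area,
    At the throat area, and c_star the characteristic velocity."""
    return (rho_p * a * Ab * c_star / At) ** (1.0 / (1.0 - n))
```

In an optimization loop, the analytically computed star-grain burn area Ab(web) feeds this relation at each web distance, and the resulting pressure trace integrates into thrust and total impulse, which is the quantity the GA maximizes.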
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization
NASA Astrophysics Data System (ADS)
Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission with a fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based off altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
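A toy version of the greedy selection under the stated metrics (completeness as reward, integration plus overhead time as cost) is shown below. The ratio ranking and simple budget rule are our assumptions for illustration; AYO and EXOSIMS are considerably more sophisticated.

```python
def greedy_schedule(targets, budget):
    """Greedy target selection. targets: list of tuples
    (name, completeness, t_integration, t_overhead), times in days;
    budget: total mission time available in days. Picks targets by
    reward-per-cost ratio until the budget is exhausted."""
    ranked = sorted(targets, key=lambda t: t[1] / (t[2] + t[3]), reverse=True)
    plan, used = [], 0.0
    for name, comp, t_int, t_ovh in ranked:
        cost = t_int + t_ovh
        if used + cost <= budget:
            plan.append(name)
            used += cost
    return plan, used
```

This is the classic greedy heuristic for a knapsack-like problem: it is not guaranteed optimal, but it is fast and typically close when no single target dominates the budget.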
NASA Astrophysics Data System (ADS)
Jakovetic, Dusan; Xavier, João; Moura, José M. F.
2011-08-01
We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of private (known only to a node) convex objectives, and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures using a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel type, randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation and data storage costs. We prove convergence for all proposed algorithms and demonstrate by simulations their effectiveness on two applications: l1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
WFIRST: Resolving the Milky Way Galaxy
NASA Astrophysics Data System (ADS)
Kalirai, Jason; Conroy, Charlie; Dressler, Alan; Geha, Marla; Levesque, Emily; Lu, Jessica; Tumlinson, Jason
2018-01-01
WFIRST will have a transformative impact on measuring and characterizing resolved stellar populations in the Milky Way. The proximity of these populations and the level of detail at which they must be studied map directly onto all three pillars of WFIRST's capabilities: sensitivity from a 2.4 meter space-based telescope, resolution from 0.1" pixels, and a large 0.3 degree field of view from multiple detectors. In this poster, we describe the activities of the WFIRST Science Investigation Team (SIT) "Resolving the Milky Way with WFIRST". Notional programs guiding our analysis include targeting sightlines to establish the first well-resolved large-scale maps of the Galactic bulge and central region, pockets of star formation in the disk, benchmark star clusters, and halo substructure and ultra-faint dwarf satellites. As an output of this study, our team is building optimized strategies and tools to maximize stellar population science with WFIRST. These will include: new grids of IR-optimized stellar evolution and synthetic spectroscopic models; pipelines and algorithms for optimal data reduction at the WFIRST sensitivity and pixel scale; wide-field simulations of Milky Way environments including new astrometric studies; and strategies and automated algorithms to find substructure and dwarf galaxies in the Milky Way through the WFIRST High Latitude Survey.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to establish statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
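A minimal annealing loop over the two recursion parameters might look like the following. The Gaussian proposal, geometric cooling schedule, and clipping to [0, 1] are our assumptions, with objective standing in for the paper's regression-model cost.

```python
import math
import random

def anneal(objective, x0, step=0.05, t0=1.0, cooling=0.95, iters=2000):
    """Simulated annealing for a small parameter vector in [0, 1]^d.
    objective(x) returns the cost to minimize; worse moves are accepted
    with probability exp(-delta/T), letting the search escape local optima."""
    x, fx, t = list(x0), objective(x0), t0
    best, fbest = list(x), fx
    for _ in range(iters):
        cand = [min(1.0, max(0.0, xi + random.gauss(0.0, step))) for xi in x]
        fc = objective(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling   # geometric cooling
    return best, fbest
```

The acceptance rule is what distinguishes annealing from plain hill climbing: early on (large T) it wanders broadly; as T shrinks, it settles into the best basin found.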
Comparison of spike-sorting algorithms for future hardware implementation.
Gibson, Sarah; Judy, Jack W; Markovic, Dejan
2008-01-01
Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
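Both chosen methods are simple enough to state in a few lines, which is precisely why they suit hardware. In the sketch below, the detection threshold as a scaled mean of the operator output is a common convention, and the derivative offsets are typical values rather than necessarily the paper's exact settings.

```python
import numpy as np

def neo_detect(x, c=8.0):
    """Nonlinear energy operator detection:
    psi[n] = x[n]**2 - x[n-1]*x[n+1], which is large where both amplitude
    and frequency rise, as during a spike. Returns indices of samples
    whose psi exceeds c times the mean of psi."""
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]
    return np.flatnonzero(psi > c * psi.mean()) + 1

def discrete_derivative_features(spike, deltas=(1, 3, 7)):
    """Discrete derivatives feature vector: differences
    dd_d[n] = s[n] - s[n-d] at several offsets d, concatenated."""
    return np.concatenate([spike[d:] - spike[:-d] for d in deltas])
```

Each call uses only subtractions, multiplications and comparisons, with no transforms or matrix decompositions, which is the complexity argument the abstract makes against PCA-style feature extraction.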
Sethi, Gaurav; Saini, B S
2015-12-01
This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring, which leads to the least-fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the optimal solution search. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Accordingly, all of the least-fit parent chromosomes are combined with high-fitness parents to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
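A sketch of the pairing idea on real-coded chromosomes: each high-fitness parent is crossed with a low-fitness one via a random blend, so weak chromosomes are recombined rather than left to accumulate. The blend crossover and all names here are our assumptions, not the paper's exact operators.

```python
import random

def pair_and_cross(pop, fitness):
    """Cross each high-fitness parent with a low-fitness one.
    pop: list of real-valued gene lists; fitness: parallel list
    (higher is better). Returns the offspring list."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i], reverse=True)
    half = len(pop) // 2
    children = []
    # pair the best with the worst, second best with second worst, ...
    for hi, lo in zip(order[:half], reversed(order[half:])):
        a, b = pop[hi], pop[lo]
        # random blend crossover: each child gene lies between the parents'
        children.append([x + random.random() * (y - x) for x, y in zip(a, b)])
        children.append([x + random.random() * (y - x) for x, y in zip(a, b)])
    return children
```

Combined with elitism (copying the best few individuals forward unchanged), this keeps the population's fitness from degrading while still reusing genetic material from weak individuals.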
NASA Astrophysics Data System (ADS)
Khehra, Baljit Singh; Pharwaha, Amar Partap Singh
2017-04-01
Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Robust feature selection is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. For n features, the total number of possible feature subsets is 2^n, so selecting an optimal subset is a difficult search problem that belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using the genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from the benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
IceAge: Chemical Evolution of Ices during Star Formation
NASA Astrophysics Data System (ADS)
McClure, Melissa; Bailey, J.; Beck, T.; Boogert, A.; Brown, W.; Caselli, P.; Chiar, J.; Egami, E.; Fraser, H.; Garrod, R.; Gordon, K.; Ioppolo, S.; Jimenez-Serra, I.; Jorgensen, J.; Kristensen, L.; Linnartz, H.; McCoustra, M.; Murillo, N.; Noble, J.; Oberg, K.; Palumbo, M.; Pendleton, Y.; Pontoppidan, K.; Van Dishoeck, E.; Viti, S.
2017-11-01
Icy grain mantles are the main reservoir for volatile elements in star-forming regions across the Universe, as well as the formation site of pre-biotic complex organic molecules (COMs) seen in our Solar System. We propose to trace the evolution of pristine and complex ice chemistry in a representative low-mass star-forming region through observations of a pre-stellar core, a Class 0 protostar, a Class I protostar, and a protoplanetary disk. Comparing high spectral resolution (R 1500-3000) and sensitivity (S/N 100-300) observations from 3 to 15 μm to template spectra, we will map the spatial distribution of ices down to 20-50 AU in these targets to identify when, and at what visual extinction, the formation of each ice species begins. Such high-resolution spectra will allow us to search for new COMs, as well as distinguish between different ice morphologies, thermal histories, and mixing environments. The analysis of these data will result in science products beneficial to Cycle 2 proposers. A newly updated public laboratory ice database will provide feature identifications for all of the expected ices, while a chemical model fit to the observed ice abundances will be released publicly as a grid, with varied metallicity and UV fields to simulate other environments. We will create improved algorithms to extract NIRCam WFSS spectra in crowded fields with extended sources, as well as optimize the defringing of MIRI LRS spectra in order to recover broad spectral features. We anticipate that these resources will be particularly useful for astrochemistry and spectroscopy of fainter, extended targets like star-forming regions of the SMC/LMC or more distant galaxies.
ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm
NASA Astrophysics Data System (ADS)
Kora, Padmavathi; Sri Rama Krishna, K.
2016-12-01
Atrial fibrillation (AF) is a type of heart abnormality; during AF, electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to abnormalities in the heart. The method consists of three major steps for the detection of heart disease: signal pre-processing, feature extraction, and classification. Feature extraction is the key step in detecting heart abnormality. Most ECG detection systems depend on time-domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC measures the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. The Levenberg-Marquardt neural network classifier is used to classify the optimized features, and the performance of the classifier is improved by the optimized features.
Aircraft Route Optimization using the A-Star Algorithm
2014-03-27
The Map Cost array allows a search for a route that not only seeks to minimize the distance travelled, but also considers other factors that may impact the flight. ... A Visual Flight Rules (VFR) flight profile requires aviators to plan a 20-minute fuel reserve into the flight, while an Instrument Flight Rules (IFR) flight profile ...
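For reference, the core of such a route optimizer is textbook A*, sketched below over an abstract graph. The neighbors/cost/heuristic callables are our interface, with cost standing in for lookups into a cost array that folds in distance plus other penalties; none of this is the report's actual code.

```python
import heapq
import itertools

def a_star(start, goal, neighbors, cost, heuristic):
    """A* search: repeatedly expand the node minimizing g + h.
    neighbors(n) yields successor nodes; cost(a, b) is the edge cost;
    heuristic(n, goal) must never overestimate the true remaining cost,
    or the returned route may be suboptimal."""
    counter = itertools.count()          # tie-breaker so the heap never compares nodes
    open_heap = [(heuristic(start, goal), next(counter), start)]
    came_from = {start: None}
    g_score = {start: 0.0}
    closed = set()
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                 # reconstruct the route
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(node):
            ng = g_score[node] + cost(node, nxt)
            if ng < g_score.get(nxt, float("inf")):
                g_score[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(open_heap, (ng + heuristic(nxt, goal), next(counter), nxt))
    return None                          # goal unreachable
```

On a grid of waypoints, straight-line distance is the usual admissible heuristic; weather or threat penalties belong in cost, not in heuristic, so optimality is preserved.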
Computationally efficient algorithms for real-time attitude estimation
NASA Technical Reports Server (NTRS)
Pringle, Steven R.
1993-01-01
For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden that can be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. The design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong
2013-11-01
In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized with a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The optimal architecture was a feed-forward neural network with three input neurons, one hidden layer with eight neurons, and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R² of 0.9571, which implied good agreement between the predicted and actual values and confirmed good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content actually measured under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which matched the predicted value (597.2 mg/g DW) well. This suggests that the artificial neural network model described in this work is an efficient quantitative tool for predicting the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving the speed of the LiveWire algorithm is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the wavelet transform. Second, the LiveWire shortest path is calculated with a direction-guided search over the control point set, exploiting the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool for points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, whose image decomposition and reconstruction are fast and consistent with the texture features of the image, with those of the direction-guided optimal path search over the control point set, which reduces the time complexity of the original algorithm. The algorithm therefore improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the methods mentioned above play a large role in improving the execution efficiency and robustness of the algorithm.
NASA Astrophysics Data System (ADS)
Perez, Adrianna; Moreno, Jorge; Naiman, Jill; Ramirez-Ruiz, Enrico; Hopkins, Philip F.
2017-01-01
In this work, we analyze the environments surrounding star clusters in simulated merging galaxies. Our framework employs the Feedback In Realistic Environments (FIRE) model (Hopkins et al., 2014). The FIRE project is a high-resolution cosmological simulation that resolves star-forming regions and incorporates stellar feedback in a physically realistic way. The project focuses on analyzing the properties of the star clusters formed in merging galaxies. The locations of these star clusters are identified with astrodendro.py, a publicly available dendrogram algorithm. Once star cluster properties are extracted, they will be used to create a sub-grid (smaller than the resolution scale of FIRE) of gas confinement in these clusters. We can then examine how the star clusters interact with these available gas reservoirs (either by accreting this mass or blowing it out via feedback), which will determine many properties of the cluster (star formation history, compact object accretion, etc.). These simulations will further our understanding of star formation within stellar clusters during galaxy evolution. In the future, we aim to enhance sub-grid prescriptions for feedback specific to processes within star clusters, such as interaction with stellar winds and gas accretion onto black holes and neutron stars.
A new tool for post-AGB SED classification
NASA Astrophysics Data System (ADS)
Bendjoya, P.; Suarez, O.; Galluccio, L.; Michel, O.
We present the results of an unsupervised classification method applied to a set of 344 spectral energy distributions (SEDs) of post-AGB stars extracted from the Torun catalogue of Galactic post-AGB stars. The aim is a new, unbiased approach to post-AGB star classification based on the information contained in the IR region of the SED (fluxes, IR excess, colours). We used data from the IRAS and MSX satellites and from the 2MASS survey. We applied a classification method based on constructing a minimal spanning tree (MST) over the dataset with Prim's algorithm. To build this tree, different metrics were tested on both fluxes and colour indices. Our method classifies the set of 344 post-AGB stars into 9 distinct groups according to their SEDs.
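A compact version of the tree construction on a precomputed distance matrix is shown below (lazy Prim with a heap). The metric between two SEDs, and all names here, are assumptions for illustration; cutting the longest MST edges then splits the tree into the classification groups.

```python
import heapq

def prim_mst(dist):
    """Prim's algorithm on a dense symmetric n x n distance matrix.
    Returns the MST as a list of (i, j, d) edges. For the SED
    application, dist[i][j] would hold the chosen metric between the
    flux/colour vectors of stars i and j."""
    n = len(dist)
    in_tree = [False] * n
    in_tree[0] = True                       # grow the tree from node 0
    heap = [(dist[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while heap and len(edges) < n - 1:
        d, i, j = heapq.heappop(heap)
        if in_tree[j]:                      # stale entry, node already added
            continue
        in_tree[j] = True
        edges.append((i, j, d))
        for k in range(n):                  # offer edges from the new node
            if not in_tree[k]:
                heapq.heappush(heap, (dist[j][k], j, k))
    return edges
```

Removing the k-1 longest edges from the returned list leaves k connected components, which is the standard MST route to an unsupervised grouping like the 9 classes reported here.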
Spectral analysis of stellar light curves by means of neural networks
NASA Astrophysics Data System (ADS)
Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.
1999-06-01
Periodicity analysis of unevenly collected data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves, which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well in unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components which, in turn, are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant to noise and works well even with few points in the sequence. We benchmark the system on synthetic and real signals against the Periodogram and the Cramer-Rao lower bound. This work has been partially supported by IIASS, by MURST 40% funds and by the Italian Space Agency.
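The MUSIC step can be sketched compactly for an evenly sampled series. The paper's pipeline first interpolates the uneven data and extracts the principal components with a Hebbian network; the plain autocorrelation-matrix version below, and every name in it, are our simplification of that idea.

```python
import numpy as np

def music_spectrum(x, fs, p, freqs, m=50):
    """MUSIC pseudospectrum of signal x sampled at rate fs.
    p: assumed number of real sinusoids (2p signal-subspace dimensions);
    freqs: frequencies (Hz) at which to evaluate; m: correlation order
    (must satisfy m > 2p)."""
    n = len(x) - m + 1
    X = np.stack([x[i:i + m] for i in range(n)])    # snapshot matrix
    R = (X.conj().T @ X) / n                        # sample correlation matrix
    _, v = np.linalg.eigh(R)                        # eigenvalues ascending
    En = v[:, : m - 2 * p]                          # noise-subspace eigenvectors
    t = np.arange(m) / fs
    spec = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * t)              # steering vector at f
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)
```

Peaks of the pseudospectrum mark frequencies whose steering vectors are nearly orthogonal to the noise subspace, which is what gives MUSIC its resolution advantage over the plain periodogram.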
Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa
2017-09-01
A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves regular archetypal analysis with a new binary sparse constraint, and the adoption of a kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transforms the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects of large noise and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing the initial pure endmembers are utilized to improve the computational efficiency of realistic implementations of RKADA. The optimization equation of RKADA is solved using a block coordinate descent scheme, and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are compared against the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with small spectral differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.
Active surface model improvement by energy function optimization for 3D segmentation.
Azimifar, Zohreh; Mohaddesi, Mahsa
2015-04-01
This paper proposes an optimized and efficient active surface model that improves the energy functions, searching method, neighborhood definition and resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially in medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet-edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on high-curvature region extraction. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimation of the desired object as the initialization for the active surface model. A number of tests and experiments have been performed, and the results show improvements in extracted surface accuracy and computational time compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Earth resources data analysis program, phase 3
NASA Technical Reports Server (NTRS)
1975-01-01
Tasks were performed in two areas: (1) systems analysis and (2) algorithmic development. The major effort in the systems analysis task was the development of a recommended approach to the monitoring of resource utilization data for the Large Area Crop Inventory Experiment (LACIE). Other efforts included participation in various studies concerning the LACIE Project Plan, the utility of the GE Image 100, and the specifications for a special-purpose processor to be used in the LACIE. In the second task, the major effort was the development of improved algorithms for estimating proportions of unclassified remotely sensed data. Work was also performed on optimal feature extraction, including optimal feature extraction for proportion estimation.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) was measured in low-carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal was investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a good linear relationship with the carbon content of the samples. The result was validated with a Monte Carlo simulation. To ensure the sensitivity of the measurement, the advanced multi-objective optimization algorithm non-dominated sorting genetic algorithm III (NSGA-III) was used to optimize the sensor's magnetic core.
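The ΔG extraction reduces to a standard nonlinear least-squares fit. A minimal sketch with SciPy follows; the sum-of-two-Gaussians form matches the abstract, while the initial-guess interface and variable names are our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(h, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian peaks over the abscissa h."""
    return (a1 * np.exp(-0.5 * ((h - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((h - m2) / s2) ** 2))

def peak_gap(h, profile, p0):
    """Fit an MBN profile with two Gaussians and return the peak gap
    Delta-G = |m2 - m1|. p0 is an initial guess (a1, m1, s1, a2, m2, s2);
    curve_fit does Levenberg-Marquardt-style least squares."""
    popt, _ = curve_fit(two_gaussians, h, profile, p0=p0)
    return abs(popt[4] - popt[1])
```

Per the abstract, ΔG computed this way varies roughly linearly with carbon content, so a calibration line fitted to reference samples turns the gap into a carbon-content estimate.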
Detection of faults in rotating machinery using periodic time-frequency sparsity
NASA Astrophysics Data System (ADS)
Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.
2016-11-01
This paper addresses the problem of extracting periodic oscillatory features in vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain, where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are formulated to promote periodicity. To solve the proposed optimization problem, we develop an algorithm called the augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. As examples, the proposed approach is applied to simulated data and used as a tool for diagnosing faults in bearings and gearboxes on real data, and compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.
Online particle detection with Neural Networks based on topological calorimetry information
NASA Astrophysics Data System (ADS)
Ciodaro, T.; Deva, D.; de Seixas, J. M.; Damazio, D.
2012-06-01
This paper presents the latest results from the Ringer algorithm, which is based on artificial neural networks for the electron identification at the online filtering system of the ATLAS particle detector, in the context of the LHC experiment at CERN. The algorithm performs topological feature extraction using the ATLAS calorimetry information (energy measurements). The extracted information is presented to a neural network classifier. Studies showed that the Ringer algorithm achieves high detection efficiency, while keeping the false alarm rate low. Optimizations, guided by detailed analysis, reduced the algorithm execution time by 59%. Also, the total memory necessary to store the Ringer algorithm information represents less than 6.2 percent of the total filtering system amount.
Optimizing searches for electromagnetic counterparts of gravitational wave triggers
NASA Astrophysics Data System (ADS)
Coughlin, Michael W.; Tao, Duo; Chan, Man Leong; Chatterjee, Deep; Christensen, Nelson; Ghosh, Shaon; Greco, Giuseppe; Hu, Yiming; Kapadia, Shasvath; Rana, Javed; Salafia, Om Sharan; Stubbs, Christopher W.
2018-04-01
With the detection of a binary neutron star system and its corresponding electromagnetic counterparts, a new window of transient astronomy has opened. Due to the size of the sky localization regions, which can span hundreds to thousands of square degrees, there are significant benefits to optimizing tilings for these large sky areas. The rich science promised by gravitational-wave astronomy has led to the proposal of a variety of tiling and time allocation schemes, and for the first time, we make a systematic comparison of some of these methods. We find that differences of a factor of 2 or more in efficiency are possible, depending on the algorithm employed. For this reason, with future surveys searching for electromagnetic counterparts, care should be taken when selecting tiling, time allocation, and scheduling algorithms to optimize counterpart detection.
Optimizing searches for electromagnetic counterparts of gravitational wave triggers
NASA Astrophysics Data System (ADS)
Coughlin, Michael W.; Tao, Duo; Chan, Man Leong; Chatterjee, Deep; Christensen, Nelson; Ghosh, Shaon; Greco, Giuseppe; Hu, Yiming; Kapadia, Shasvath; Rana, Javed; Salafia, Om Sharan; Stubbs, Christopher W.
2018-07-01
With the detection of a binary neutron star system and its corresponding electromagnetic counterparts, a new window of transient astronomy has opened. Due to the size of the sky localization regions, which can span hundreds to thousands of square degrees, there are significant benefits to optimizing tilings for these large sky areas. The rich science promised by gravitational wave astronomy has led to the proposal of a variety of tiling and time allocation schemes, and for the first time, we make a systematic comparison of some of these methods. We find that differences of a factor of 2 or more in efficiency are possible, depending on the algorithm employed. For this reason, with future surveys searching for electromagnetic counterparts, care should be taken when selecting tiling, time allocation, and scheduling algorithms to optimize counterpart detection.
Testing the TPF Interferometry Approach before Launch
NASA Technical Reports Server (NTRS)
Serabyn, Eugene; Mennesson, Bertrand
2006-01-01
One way to directly detect nearby extra-solar planets is via their thermal infrared emission, and with this goal in mind, both NASA and ESA are investigating cryogenic infrared interferometers. Common to both agencies' approaches to faint off-axis source detection near bright stars is the use of a rotating nulling interferometer, such as the Terrestrial Planet Finder interferometer (TPF-I), or Darwin. In this approach, the central star is nulled, while the emission from off-axis sources is transmitted and modulated by the rotation of the off-axis fringes. Because of the high contrasts involved, and the novelty of the measurement technique, it is essential to gain experience with this technique before launch. Here we describe a simple ground-based experiment that can test the essential aspects of the TPF signal measurement and image reconstruction approaches by generating a rotating interferometric baseline within the pupil of a large single-aperture telescope. This approach can mimic potential space-based interferometric configurations, and allow the extraction of signals from off-axis sources using the same algorithms proposed for the space-based missions. This approach should thus allow for testing of the applicability of proposed signal extraction algorithms for the detection of single and multiple near-neighbor companions...
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We propose a stable threshold-voltage extraction algorithm using the transconductance change method with an optimized node interval. With the algorithm, noise-free gm2 (= dgm/dVGS) profiles can be extracted within one percent error, which leads to a more physically meaningful threshold voltage from the transconductance change method. The extracted threshold voltage corresponds to the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming its noise problem. It yields the threshold-voltage roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation, and characterization.
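A minimal sketch of the transconductance-change idea follows: the threshold voltage is read off at the peak of gm2 = d(gm)/dVGS, with a wide-node central difference standing in, as an assumption, for the paper's optimized node-interval scheme; the synthetic I-V curve is invented for illustration.

```python
import numpy as np

def vth_transconductance_change(vgs, ids, node_interval=5):
    """Vth at the peak of gm2 = d(gm)/dVgs, computed with wide-node
    central differences that suppress measurement noise (n = 1 recovers
    the ordinary central difference)."""
    n = node_interval
    step = vgs[1] - vgs[0]                      # assumes a uniform Vgs grid
    gm = (ids[2 * n:] - ids[:-2 * n]) / (2 * n * step)
    gm2 = (gm[2 * n:] - gm[:-2 * n]) / (2 * n * step)
    v_gm2 = vgs[2 * n:-2 * n]                   # Vgs grid aligned with gm2
    return v_gm2[np.argmax(gm2)]

# Example on a synthetic, noisy I-V curve (smooth turn-on around 0.4 V)
vgs = np.linspace(0.0, 1.0, 501)
ids = np.log1p(np.exp((vgs - 0.4) / 0.05)) * 1e-4
ids += 1e-7 * np.random.default_rng(2).standard_normal(vgs.size)
print(f"Extracted Vth ≈ {vth_transconductance_change(vgs, ids):.3f} V")
```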
A novel star extraction method based on modified water flow model
NASA Astrophysics Data System (ADS)
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Ouyang, Zibiao; Yang, Yanqiang
2017-11-01
Star extraction is an essential procedure for attitude measurement with a star sensor. The great challenge for star extraction is to segment the star area exactly from various kinds of noise and background. In this paper, a novel star extraction method based on a Modified Water Flow Model (MWFM) is proposed. The star image is regarded as a 3D terrain. Morphology is adopted for noise elimination and Tentative Star Area (TSA) selection. The star area can then be extracted through adaptive water flowing within the TSAs. This method achieves accurate star extraction with improved efficiency under complex conditions such as heavy noise and uneven backgrounds. Several groups of different types of star images are processed using the proposed method, and comparisons with existing methods are conducted. Experimental results show that MWFM performs excellently under different imaging conditions: the star extraction rate is better than 95%, the star centroid accuracy is better than 0.075 pixels, and the time consumption is also significantly reduced.
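The MWFM segmentation itself is not reproduced here, but the sub-pixel centroiding stage that such a segmentation feeds can be sketched generically as an intensity-weighted mean over the segmented star window; the window contents below are synthetic.

```python
import numpy as np

def star_centroid(window, background=0.0):
    """Intensity-weighted centroid of a small window around a star:
    the centroid is the weighted mean of pixel coordinates after
    background subtraction (a generic sketch, not the MWFM itself)."""
    w = np.clip(np.asarray(window, dtype=float) - background, 0.0, None)
    total = w.sum()
    if total == 0:
        raise ValueError("empty window after background subtraction")
    yy, xx = np.indices(w.shape)
    return (yy * w).sum() / total, (xx * w).sum() / total

# Example: a defocused star spot whose true center is at (3.3, 4.7)
yy, xx = np.indices((9, 9))
spot = np.exp(-((yy - 3.3) ** 2 + (xx - 4.7) ** 2) / (2 * 1.2 ** 2))
print(star_centroid(spot))   # close to (3.3, 4.7)
```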
Optimized star sensors laboratory calibration method using a regularization neural network.
Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen
2018-02-10
High-precision ground calibration is essential to ensure the performance of star sensors. However, complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to represent the mapping from the star vector to the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
Using Machine Learning To Predict Which Light Curves Will Yield Stellar Rotation Periods
NASA Astrophysics Data System (ADS)
Agüeros, Marcel; Teachey, Alexander
2018-01-01
Using time-domain photometry to reliably measure a solar-type star's rotation period requires that its light curve have a number of favorable characteristics. The probability of recovering a period will be a non-linear function of these light curve features, which are either astrophysical in nature or set by the observations. We employ standard machine learning algorithms (artificial neural networks and random forests) to predict whether a given light curve will produce a robust rotation period measurement from its Lomb-Scargle periodogram. The algorithms are trained and validated using salient statistics extracted from both simulated light curves and their corresponding periodograms, and we apply these classifiers to the most recent Intermediate Palomar Transient Factory (iPTF) data release. With this pipeline, we anticipate measuring rotation periods for a significant fraction of the ~4×10⁸ stars in the iPTF footprint.
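As a hedged illustration of the kind of periodogram statistics such a classifier could ingest (the paper's exact feature set is not reproduced), the following sketch computes a Lomb-Scargle periodogram with scipy on an irregularly sampled synthetic light curve and extracts a few summary features.

```python
import numpy as np
from scipy.signal import lombscargle

def periodogram_features(t, y, f_min=0.01, f_max=5.0, n_freq=2000):
    """A few illustrative summary statistics of a Lomb-Scargle
    periodogram (assumed feature choices, not the paper's set)."""
    freqs = np.linspace(f_min, f_max, n_freq) * 2 * np.pi   # angular
    power = lombscargle(t, y - y.mean(), freqs, normalize=True)
    peak = power.max()
    return {
        "peak_power": float(peak),
        "peak_period_days": float(2 * np.pi / freqs[np.argmax(power)]),
        # Contrast of the top peak against the typical power level:
        "peak_to_median": float(peak / np.median(power)),
        "n_points": int(t.size),
        "baseline_days": float(t.max() - t.min()),
    }

# Example: irregularly sampled spotted-star light curve, 3.7-day period
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 90, 400))
y = 0.02 * np.sin(2 * np.pi * t / 3.7) + 0.005 * rng.standard_normal(t.size)
print(periodogram_features(t, y))
```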
Star adaptation for two algorithms used on serial computers
NASA Technical Reports Server (NTRS)
Howser, L. M.; Lambiotte, J. J., Jr.
1974-01-01
Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
Research on Optimization of GLCM Parameter in Cell Classification
NASA Astrophysics Data System (ADS)
Zhang, Xi-Kun; Hou, Jie; Hu, Xin-Hua
2016-05-01
Real-time classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. A gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images, but its large amount of calculation makes it hard to fit into a real-time system. An optimization of the GLCM algorithm is provided based on correlation analysis of the GLCM parameters. The results of GLCM analysis and subsequent classification demonstrate that the optimized method can lower the time complexity significantly without loss of classification accuracy.
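A generic GLCM feature computation using scikit-image is sketched below; the paper's correlation-pruned parameter set is not reproduced, and the random input is a stand-in for a measured diffraction image. (In older scikit-image versions the functions are spelled greycomatrix/greycoprops.)

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_u8, distances=(1,), angles=(0, np.pi / 2)):
    """Standard GLCM texture features; each property is averaged
    over the requested distances and angles."""
    glcm = graycomatrix(image_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Example on a random 8-bit "diffraction image" stand-in
rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(img))
```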
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems
Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.
Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.
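The records above describe the PSO step only at a high level; as a hedged illustration, here is a plain global-best PSO loop with a toy sizing cost. The cost model, prices, and demand figure are invented placeholders, not the paper's cost/reliability model.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm optimization.

    `bounds` is a sequence of (low, high) per decision variable,
    e.g. [PV size, wind turbine size, battery capacity].
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy sizing cost: capital cost plus a penalty for unmet load
def toy_cost(sizes):
    pv_kw, wt_kw, batt_kwh = sizes
    capital = 1000 * pv_kw + 1500 * wt_kw + 200 * batt_kwh
    supply = 4 * pv_kw + 6 * wt_kw + 0.3 * batt_kwh   # crude daily kWh
    unmet = max(0.0, 100.0 - supply)                   # 100 kWh/day demand
    return capital + 1e4 * unmet

best, best_cost = pso_minimize(toy_cost, [(0, 50), (0, 30), (0, 200)])
print(best, best_cost)
```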
A Brightness-Referenced Star Identification Algorithm for APS Star Trackers
Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning
2014-01-01
Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes the k-vector search theory and adds the imaged stars' intensities to narrow the search scope and therefore increase the efficiency of the matching process. Based on different imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If reference bright stars (stars brighter than magnitude three) show up, the algorithm runs the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog composed of 1500 stars, the results show that without false stars the efficiency of this new method is 4∼5 times that of the pyramid method and 35∼37 times that of the geometric method. PMID:25299950
A brightness-referenced star identification algorithm for APS star trackers.
Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning
2014-10-08
Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes the k-vector search theory and adds the imaged stars' intensities to narrow the search scope and therefore increase the efficiency of the matching process. Based on different imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If reference bright stars (stars brighter than magnitude three) show up, the algorithm runs the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog composed of 1500 stars, the results show that without false stars the efficiency of this new method is 4~5 times that of the pyramid method and 35~37 times that of the geometric method.
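The k-vector index itself is not reproduced here, but the idea of a range search over a distance-sorted pair catalog, narrowed further by brightness, can be sketched with a plain binary search; the tuple layout, tolerances, and tiny catalog below are assumptions for illustration.

```python
import bisect

def match_pair(obs_angle, obs_mag_sum, catalog_pairs,
               angle_tol=1e-4, mag_tol=0.5):
    """Range search over star pairs sorted by angular distance, then
    pruned by the summed brightness of the pair. A real k-vector builds
    a precomputed index line over the sorted table instead of bisect."""
    angles = [p[0] for p in catalog_pairs]
    lo = bisect.bisect_left(angles, obs_angle - angle_tol)
    hi = bisect.bisect_right(angles, obs_angle + angle_tol)
    # Brightness information prunes the angular-distance candidates
    return [p for p in catalog_pairs[lo:hi]
            if abs(p[1] - obs_mag_sum) <= mag_tol]

# Tiny synthetic catalog: (angle_rad, mag_sum, star_id_a, star_id_b)
catalog = sorted([
    (0.0101, 7.9, 11, 42), (0.0102, 5.1, 11, 57),
    (0.0150, 6.3, 23, 42), (0.0301, 5.0, 57, 88),
])
print(match_pair(0.0102, 5.0, catalog, angle_tol=2e-4))
```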
VizieR Online Data Catalog: Proper motions of PM2000 open clusters (Krone-Martins+, 2010)
NASA Astrophysics Data System (ADS)
Krone-Martins, A.; Soubiran, C.; Ducourant, C.; Teixeira, R.; Le Campion, J. F.
2010-04-01
We present lists of proper motions and kinematic membership probabilities for the regions of 49 open clusters or possible open clusters. The stellar proper motions were taken from the Bordeaux PM2000 catalogue. The segregation between cluster and field stars and the assignment of membership probabilities were accomplished by applying a fully automated method based on parametrisations of the probability distribution functions and genetic-algorithm optimisation heuristics associated with a derivative-based hill-climbing algorithm for the likelihood optimization. (3 data files).
Cosmic Web of Galaxies in the COSMOS Field
NASA Astrophysics Data System (ADS)
Darvish, Behnam; Martin, Christopher D.; Mobasher, Bahram; Scoville, Nicholas; Sobral, David; COSMOS science Team
2017-01-01
We use a mass-complete sample of galaxies with accurate photometric redshifts in the COSMOS field to estimate the density field and to extract the components of the cosmic web. The cosmic web extraction algorithm relies on the signs and the ratios of the eigenvalues of the Hessian matrix of the density field and is able to segment the field into clusters, filaments, and the general field. We show that at z < 0.8, the median star-formation rate in the cosmic web gradually declines from the field to clusters, and this decline is especially sharp for satellite galaxies (~1 dex vs. ~0.4 dex for centrals). However, at z > 0.8, the trend flattens out. For star-forming galaxies only, the median star-formation rate declines by ~0.3-0.4 dex from the field to clusters for both satellites and centrals, but only at z < 0.5. We argue that for satellite galaxies, the main role of the cosmic web environment is to control their star-forming/quiescent fraction, whereas for centrals, it is mainly to control their overall star-formation rate. Given these results, we suggest that most satellite galaxies experience a rapid quenching mechanism as they fall from the field into clusters through the channel of filaments, whereas for central galaxies, quenching is mostly due to a slow process. Our preliminary results highlight the importance of the large-scale cosmic web for the evolution of galaxies.
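A hedged sketch of Hessian-eigenvalue classification on a smoothed 2D density field follows; the zero thresholds and the 2D setting are simplifying assumptions (published classifiers typically use 3D fields and tuned eigenvalue thresholds).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def classify_cosmic_web(density, smooth_sigma=2.0):
    """Label each pixel of a smoothed 2D density field as cluster-like,
    filament-like, or field from the signs of the Hessian eigenvalues."""
    d = gaussian_filter(np.asarray(density, dtype=float), smooth_sigma)
    dy, dx = np.gradient(d)
    dyy, dyx = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    labels = np.empty(d.shape, dtype="U8")
    for i in range(d.shape[0]):
        for j in range(d.shape[1]):
            hess = np.array([[dxx[i, j], dxy[i, j]],
                             [dyx[i, j], dyy[i, j]]])
            l1, l2 = np.sort(np.linalg.eigvalsh(hess))
            if l2 < 0:        # both eigenvalues negative: compact peak
                labels[i, j] = "cluster"
            elif l1 < 0:      # one negative direction: elongated ridge
                labels[i, j] = "filament"
            else:
                labels[i, j] = "field"
    return labels

rng = np.random.default_rng(5)
labels = classify_cosmic_web(rng.random((32, 32)))
print(np.unique(labels, return_counts=True))
```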
The ATLASGAL survey: a catalog of dust condensations in the Galactic plane
NASA Astrophysics Data System (ADS)
Csengeri, T.; Urquhart, J. S.; Schuller, F.; Motte, F.; Bontemps, S.; Wyrowski, F.; Menten, K. M.; Bronfman, L.; Beuther, H.; Henning, Th.; Testi, L.; Zavagno, A.; Walmsley, M.
2014-05-01
Context. The formation processes and the evolutionary stages of high-mass stars are poorly understood compared to low-mass stars. Large-scale surveys are needed to provide an unbiased census of high column density sites that can potentially host precursors to high-mass stars. Aims: The ATLASGAL survey covers 420 square degrees of the Galactic plane, between -80° < ℓ < +60°, at 870 μm. Here we identify the population of embedded sources throughout the inner Galaxy. With this catalog we first investigate the general statistical properties of dust condensations in terms of their observed parameters, such as flux density and angular size. Then, using mid-infrared surveys, we investigate their star formation activity and the Galactic distribution of star-forming and quiescent clumps. Our ultimate goal is to determine the statistical properties of quiescent and star-forming clumps within the Galaxy and to constrain the star formation processes. Methods: We optimized the source extraction method, referred to as MRE-GCL, for the ATLASGAL maps in order to generate a catalog of compact sources. This technique is based on multiscale filtering to remove extended emission from clouds to better determine the parameters corresponding to the embedded compact sources. In a second step we extracted the sources by fitting 2D Gaussians with the Gaussclumps algorithm. Results: We have identified in total 10861 compact submillimeter sources with fluxes above 5σ. Completeness tests show that this catalog is 97% complete above 5σ and >99% complete above 7σ. Correlating this sample of clumps with mid-infrared point source catalogs (MSX at 21.3 μm and WISE at 22 μm), we have determined a lower limit of 33% that is associated with embedded protostellar objects. We note that the proportion of clumps associated with mid-infrared sources increases with increasing flux density, with a rather constant fraction of ~75% of all clumps with fluxes over 5 Jy/beam being associated with star formation. Examining the source counts as a function of Galactic longitude, we are able to identify the most prominent star-forming regions in the Galaxy. Conclusions: We present here the compact source catalog of the full ATLASGAL survey and investigate the sources' characteristic properties. From the fraction of likely massive quiescent clumps (~25%), we estimate a formation time scale of ~7.5 ± 2.5 × 10⁴ yr for the deeply embedded phase before the emergence of luminous young stellar objects. Such a short duration for the formation of high-mass stars in massive clumps clearly proves that the earliest phases have to be dynamic, with supersonic motions. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A75
Deep coupling of star tracker and MEMS-gyro data under highly dynamic and long exposure conditions
NASA Astrophysics Data System (ADS)
Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin
2014-08-01
Star trackers and gyroscopes are the two most widely used attitude measurement devices in spacecraft. The star tracker is supposed to have the highest accuracy among attitude measurement devices in stable conditions. In general, to detect faint stars and reduce the size of the star tracker, a long exposure time is usually used. Thus, under dynamic conditions, smearing of the star image may appear and result in decreased accuracy or even failed extraction of the star spot, which may cause inaccuracies in attitude measurement. Gyros have relatively good dynamic performance and are usually used in combination with star trackers. However, current combination methods focus mainly on data fusion at the output attitude level, which is inadequate for utilizing and processing the internal blurred star image information. A method for deep coupling of star tracker and MEMS-gyro data is proposed in this work. The method achieves deep fusion at the star image level. First, dynamic star image processing is performed based on the angular velocity information from the MEMS-gyro. The signal-to-noise ratio (SNR) of the star spot can be improved, and extraction is achieved more effectively. Then, a prediction model for optimal estimation of the star spot position is obtained through the MEMS-gyro, and an extended Kalman filter is introduced. Meanwhile, the MEMS-gyro drift can be estimated and compensated through the proposed method. These enable the star tracker to achieve high star centroid determination accuracy under dynamic conditions. The MEMS-gyro drift can be corrected even when the attitude data of the star tracker cannot be solved and only one navigation star is captured in the field of view. Laboratory experiments were performed to verify the effectiveness of the proposed method and the whole system.
NASA Astrophysics Data System (ADS)
Shirenin, A. M.; Mazurova, E. M.; Bagrov, A. V.
2016-11-01
The paper presents a mathematical algorithm for processing an array of angular measurements of light beacons in images of the lunar surface onboard a polar artificial lunar satellite (PALS) during the Luna-Glob mission, and for coordinate-time referencing of the PALS for the development of reference selenocentric coordinate systems. The algorithm makes it possible to obtain the angular positions of point light beacons located on the surface of the Moon in selenocentric celestial coordinates. The operation of the measurement systems that determine the position and orientation of the PALS during its active existence has been numerically simulated. Recommendations have been made for the optimal use of different types of measurements, including ground radio trajectory measurements, navigational star sensors based on the onboard star catalog, gyroscopic orientation systems, and space videos of the lunar surface.
Optimum oil production planning using infeasibility driven evolutionary algorithm.
Singh, Hemant Kumar; Ray, Tapabrata; Sarker, Ruhul
2013-01-01
In this paper, we discuss a practical oil production planning optimization problem. For oil wells with insufficient reservoir pressure, gas is usually injected to artificially lift oil, a practice commonly referred to as enhanced oil recovery (EOR). The total gas that can be used for oil extraction is constrained by daily availability limits. The oil extracted from each well is known to be a nonlinear function of the gas injected into the well and varies between wells. The problem is to identify the optimal amount of gas that needs to be injected into each well to maximize the amount of oil extracted, subject to the constraint on the total daily gas availability. The problem has long been of practical interest to all major oil exploration companies, as it has the potential to deliver large financial benefits. In this paper, an infeasibility driven evolutionary algorithm is used to solve a 56-well reservoir problem, which demonstrates its efficiency in solving constrained optimization problems. Furthermore, a multi-objective formulation of the problem is posed and solved using a number of algorithms, which eliminates the need for solving the (single objective) problem on a regular basis. Lastly, a modified single objective formulation of the problem is also proposed, which aims to maximize the profit instead of the quantity of oil. It is shown that even with a lesser amount of oil extracted, more economic benefits can be achieved through the modified formulation.
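Setting aside the evolutionary machinery, the single-objective problem itself is a classical constrained nonlinear program; here is a sketch with scipy's SLSQP on illustrative, assumed gas-lift response curves (the well parameters and gas budget are invented, not field data).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative gas-lift response: oil rate of well i for injection g,
# with diminishing returns.
A = np.array([50.0, 80.0, 65.0])     # max oil rate per well (bbl/day)
K = np.array([0.8, 0.5, 1.2])        # response steepness per well
GAS_AVAILABLE = 4.0                  # total daily gas budget (MMscf)

def total_oil(g):
    return float(np.sum(A * (1.0 - np.exp(-K * g))))

# Maximize oil = minimize its negative, under the daily gas constraint
res = minimize(
    lambda g: -total_oil(g),
    x0=np.full(3, GAS_AVAILABLE / 3),
    bounds=[(0.0, GAS_AVAILABLE)] * 3,
    constraints=[{"type": "ineq",
                  "fun": lambda g: GAS_AVAILABLE - g.sum()}],
    method="SLSQP",
)
print("gas per well:", np.round(res.x, 3), "oil:", round(-res.fun, 1))
```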
A Novel, Real-Valued Genetic Algorithm for Optimizing Radar Absorbing Materials
NASA Technical Reports Server (NTRS)
Hall, John Michael
2004-01-01
A novel, real-valued Genetic Algorithm (GA) was designed and implemented to minimize the reflectivity and/or transmissivity of an arbitrary number of homogeneous, lossy dielectric or magnetic layers of arbitrary thickness positioned at either the center of an infinitely long rectangular waveguide, or adjacent to the perfectly conducting backplate of a semi-infinite, shorted-out rectangular waveguide. Evolutionary processes extract the optimal physioelectric constants falling within specified constraints which minimize reflection and/or transmission over the frequency band of interest. This GA extracted the unphysical dielectric and magnetic constants of three layers of fictitious material placed adjacent to the conducting backplate of a shorted-out waveguide such that the reflectivity of the configuration was 55 dB or less over the entire X-band. Examples of the optimization of realistic multi-layer absorbers are also presented. Although typical Genetic Algorithms require populations of many thousands in order to function properly and obtain correct results, verified correct results were obtained for all test cases using this GA with a population of only four.
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
Sorting on STAR. [CDC computer algorithm timing comparison
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)² as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
Li, Yang; Li, Guoqing; Wang, Zhenhao
2015-01-01
In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that the transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei province.
Zhou, Pei-pei; Shan, Jin-feng; Jiang, Jian-lan
2015-12-01
To optimize the microwave-assisted extraction of curcuminoids from Curcuma longa, the ethanol concentration, the liquid-to-solid ratio, and the microwave time were selected for further optimization on the basis of single-factor experiments. Support vector regression (SVR) and central composite design-response surface methodology (CCD-RSM) were utilized to design and establish models, while particle swarm optimization (PSO) was introduced to optimize the parameters of the SVR models and to search for the optimal points of the models. The sum of curcumin, demethoxycurcumin, and bisdemethoxycurcumin determined by HPLC was used as the evaluation indicator. The optimal parameters of the microwave-assisted extraction were as follows: an ethanol concentration of 69%, a liquid-to-solid ratio of 21:1, and a microwave time of 55 s. Under those conditions, the sum of the three curcuminoids was 28.97 mg/g (per gram of rhizome powder). Both the CCD model and the SVR model were credible, as they predicted similar process conditions and the deviation of yield was less than 1.2%.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
To address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar
NASA Astrophysics Data System (ADS)
Azim, Noor ul; Jun, Wang
2016-11-01
Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as the range, speed, and direction of a target in the field of radar communication. This paper presents LFM (linear frequency modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate the speed of a target. First, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into an implementation on hardware using a Xilinx FPGA, the Virtex-6 (XC6VLX75T). For the hardware implementation, pipeline optimization is adopted, and other factors are considered for resource optimization in the implementation process. The algorithms in this work for improving target detection, range resolution, and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
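Matched-filter pulse compression, one of the algorithms named above, can be sketched in a few lines; the sample rate, pulse parameters, and echo delay are illustrative assumptions, and np.convolve stands in for the FFT-based fast convolution used in the FPGA design.

```python
import numpy as np
from scipy.signal import chirp

fs = 10e6            # sample rate, Hz (assumed)
T = 20e-6            # pulse width, s
bw = 2e6             # sweep bandwidth, Hz

t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=0.0, f1=bw, t1=T)            # transmitted LFM pulse

# Received echo: the pulse delayed by 35 us inside a noisy window
rx = np.zeros(2000)
delay = int(35e-6 * fs)
rx[delay:delay + tx.size] += tx
rx += 0.5 * np.random.default_rng(6).standard_normal(rx.size)

# Matched filter = correlation with the time-reversed transmit pulse;
# the peak of the compressed output marks the round-trip delay.
mf = np.convolve(rx, tx[::-1], mode="valid")
print("detected delay:", np.argmax(np.abs(mf)) / fs * 1e6, "us")
```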
Resolving stellar populations with crowded field 3D spectroscopy
NASA Astrophysics Data System (ADS)
Kamann, S.; Wisotzki, L.; Roth, M. M.
2013-01-01
We describe a new method of extracting the spectra of stars from observations of crowded stellar fields with integral field spectroscopy (IFS). Our approach extends the well-established concept of crowded field photometry in images into the domain of 3-dimensional spectroscopic datacubes. The main features of our algorithm follow. (1) We assume that a high-fidelity input source catalogue already exists, e.g. from HST data, and that it is not needed to perform sophisticated source detection in the IFS data. (2) Source positions and properties of the point spread function (PSF) vary smoothly between spectral layers of the datacube, and these variations can be described by simple fitting functions. (3) The shape of the PSF can be adequately described by an analytical function. Even without isolated PSF calibrator stars we can therefore estimate the PSF by a model fit to the full ensemble of stars visible within the field of view. (4) By using sparse matrices to describe the sources, the problem of extracting the spectra of many stars simultaneously becomes computationally tractable. We present extensive performance and validation tests of our algorithm using realistic simulated datacubes that closely reproduce actual IFS observations of the central regions of Galactic globular clusters. We investigate the quality of the extracted spectra under the effects of crowding with respect to the resulting signal-to-noise ratios (S/N) and any possible changes in the continuum level, as well as with respect to absorption line spectral parameters, radial velocities, and equivalent widths. The main effect of blending between two nearby stars is a decrease in the S/N in their spectra. The effect increases with the crowding in the field in a way that the maximum number of stars with useful spectra is always ~0.2 per spatial resolution element. This balance breaks down when exceeding a total source density of one significantly detected star per resolution element. We also explore the effects of PSF mismatch and other systematics. We close with an outlook by applying our method to a simulated globular cluster observation with the upcoming MUSE instrument at the ESO-VLT. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC).Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA), and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.
Zhang, Geng; Li, Yangmin
2016-06-01
Avoiding falling into local optima is a big challenge, especially when facing high-dimensional nonseparable problems where the interdependencies among vector elements are unknown. In order to improve the performance of optimization algorithms, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses a 1-D swarm to search each dimension separately and thus converges fast. Besides, it can obtain globally optimal elements according to our experimental results and analyses. The MHS implements the global search by recombining different vector elements and extracting globally optimal elements. The interaction between local search and global search creates a set of local search zones, where globally optimal elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate the good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems.
Improving Efficiency in Multi-Strange Baryon Reconstruction in d-Au at STAR
NASA Astrophysics Data System (ADS)
Leight, William
2003-10-01
We report preliminary multi-strange baryon measurements for d-Au collisions recorded at RHIC by the STAR experiment. After using classical topological analysis, in which cuts for each discriminating variable are adjusted by hand, we investigate improvements in signal-to-noise optimization using Linear Discriminant Analysis (LDA). LDA is an algorithm for finding, in the n-dimensional space of the n discriminating variables, the axis on which the signal and noise distributions are most separated. LDA is the first step in moving towards more sophisticated techniques for signal-to-noise optimization, such as Artificial Neural Nets. Due to the relatively low background and sufficiently high yields of d-Au collisions, they form an ideal system to study these possibilities for improving reconstruction methods. Such improvements will be extremely important for forthcoming Au-Au runs in which the size of the combinatoric background is a major problem in reconstruction efforts.
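The LDA axis mentioned above has a closed form: w = Sw⁻¹(m1 - m0), with Sw the pooled within-class scatter matrix. A generic numpy sketch follows (not STAR analysis code; the two toy variables are assumed stand-ins for topological cut variables).

```python
import numpy as np

def fisher_lda_axis(signal, noise):
    """Fisher's linear discriminant: the axis w maximizing the class
    separation is w = Sw^{-1} (m1 - m0)."""
    m1, m0 = signal.mean(axis=0), noise.mean(axis=0)
    # Pooled within-class scatter matrix
    sw = np.cov(signal, rowvar=False) + np.cov(noise, rowvar=False)
    w = np.linalg.solve(sw, m1 - m0)
    return w / np.linalg.norm(w)

# Toy example with two discriminating variables
rng = np.random.default_rng(7)
sig = rng.normal([1.0, 2.0], 0.5, size=(500, 2))
bkg = rng.normal([0.0, 0.0], 0.8, size=(5000, 2))
w = fisher_lda_axis(sig, bkg)
print("LDA axis:", w)   # cut on the projection x @ w instead of box cuts
```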
Visual attitude propagation for small satellites
NASA Astrophysics Data System (ADS)
Rawashdeh, Samir A.
As electronics become smaller and more capable, it has become possible to conduct meaningful and sophisticated satellite missions in a small form factor. However, the capability of small satellites and the range of possible applications are limited by the capabilities of several technologies, including attitude determination and control systems. This dissertation evaluates the use of image-based visual attitude propagation as a complement or alternative to other attitude determination technologies that are suitable for miniature satellites. The concept lies in using miniature cameras to track image features across frames and extracting the underlying rotation. The problem of visual attitude propagation as a small satellite attitude determination system is addressed from several aspects: related work, algorithm design, hardware and performance evaluation, possible applications, and on-orbit experimentation. These areas of consideration reflect the organization of this dissertation. A "stellar gyroscope" is developed, which is a visual star-based attitude propagator that uses the relative motion of stars in an imager's field of view to infer attitude changes. The device generates spacecraft relative attitude estimates in three degrees of freedom. Algorithms to perform the star detection, correspondence, and attitude propagation are presented. The Random Sample Consensus (RANSAC) approach is applied to the correspondence problem to successfully pair stars across frames while mitigating false-positive and false-negative star detections. This approach provides tolerance to the noise levels expected when using miniature optics and no baffling, and to the noise caused by radiation dose on orbit. The hardware design and algorithms are validated using test images of the night sky. The application of the stellar gyroscope as part of a CubeSat attitude determination and control system is described. The stellar gyroscope is used to augment a MEMS gyroscope attitude propagation algorithm to minimize drift in the absence of an absolute attitude sensor. The stellar gyroscope is a technology demonstration experiment on KySat-2, a 1-Unit CubeSat being developed in Kentucky that is in line to launch with the NASA ELaNa CubeSat Launch Initiative. It has also been adopted by industry as a sensor for CubeSat Attitude Determination and Control Systems (ADCS). KEYWORDS: Small Satellites, Attitude Determination, Egomotion Estimation, RANSAC, Image Processing.
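A minimal sketch of the RANSAC-plus-rotation-fit idea follows; it assumes star unit vectors are already paired across frames (the dissertation treats correspondence separately) and uses the Kabsch SVD solution on random two-star minimal sets. All parameters and the synthetic data are assumptions.

```python
import numpy as np

def kabsch(a, b):
    """Best-fit rotation R with b_i ≈ R @ a_i, for unit vectors as rows."""
    h = a.T @ b
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

def ransac_rotation(v_prev, v_curr, n_trials=200, tol=1e-3, seed=0):
    # Fit rotations on random two-star minimal sets; keep the hypothesis
    # with which the largest number of star pairs agrees.
    rng = np.random.default_rng(seed)
    best_r, best_inliers = np.eye(3), 0
    for _ in range(n_trials):
        idx = rng.choice(len(v_prev), size=2, replace=False)
        r = kabsch(v_prev[idx], v_curr[idx])
        inliers = np.sum(np.linalg.norm(v_curr - v_prev @ r.T, axis=1) < tol)
        if inliers > best_inliers:
            best_r, best_inliers = r, inliers
    return best_r, best_inliers

# Example: a 1-degree rotation about z, with one false correspondence
rng = np.random.default_rng(8)
v1 = rng.standard_normal((12, 3))
v1 /= np.linalg.norm(v1, axis=1, keepdims=True)
ang = np.radians(1.0)
rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
v2 = v1 @ rz.T
v2[0] = -v2[0]                     # corrupt one pair
r_est, n_in = ransac_rotation(v1, v2)
print(n_in, "inliers")             # expect 11 of 12
```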
An intelligent allocation algorithm for parallel processing
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.
1988-01-01
The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimates of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication durations varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structure on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph. Hence, the focus of allocation has been on varying token-communication durations rather than on varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon the granularity and the characteristics of the interprocessor communication network.
Denimal, Emmanuel; Marin, Ambroise; Guyot, Stéphane; Journaux, Ludovic; Molin, Paul
2015-08-01
In biology, hemocytometers such as Malassez slides are widely used and are effective tools for counting cells manually. In a previous work, a robust algorithm was developed for grid extraction in Malassez slide images. This algorithm was evaluated on a set of 135 images and grids were accurately detected in most cases, but there remained failures for the most difficult images. In this work, we present an optimization of this algorithm that allows for 100% grid detection and a 25% improvement in grid positioning accuracy. These improvements make the algorithm fully reliable for grid detection. This optimization also allows complete erasing of the grid without altering the cells, which eases their segmentation.
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude estimated transmission map into different areas and applies different guided filtering to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the one based on the dark channel prior and guided filter, while its average computation time is around 40% of the latter; the detection ability of UAV images in fog and haze weather is also improved effectively.
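For context, the dark channel prior on which the improved algorithm builds can be sketched briefly; the patch size and omega constant follow common usage and are assumptions here, and the paper's edge-guided refinement is not shown.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel of an RGB image: per-pixel channel minimum followed
    by a local minimum filter over a square patch."""
    per_pixel_min = image.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

def crude_transmission(image, atmospheric_light, omega=0.95, patch=15):
    """Crude transmission estimate t = 1 - omega * dark(I / A);
    the paper refines this map with edge-guided area-wise filtering."""
    normalized = image / atmospheric_light[np.newaxis, np.newaxis, :]
    return 1.0 - omega * dark_channel(normalized, patch)

# Example on a random "hazy" image with an assumed atmospheric light
rng = np.random.default_rng(9)
img = rng.random((120, 160, 3))
A = np.array([0.9, 0.9, 0.92])
t = crude_transmission(img, A)
print(t.shape, float(t.min()), float(t.max()))
```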
Wen, Tingxi; Zhang, Zhongnan
2017-01-01
In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
Wen, Tingxi; Zhang, Zhongnan
2017-05-01
In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
Automated Design Tools for Integrated Mixed-Signal Microsystems (NeoCAD)
2005-02-01
This report describes automated design tools for integrated mixed-signal microsystems, including Model Order Reduction (MOR) tools, system-level mixed-signal circuit synthesis and optimization tools, and parasitic extraction tools. Topics covered include the overall program milestones, fast time-domain mixed-signal circuit simulation (HAARSPICE algorithms and their mathematical background), the IC design flow, and parasitic extraction. Mission area: Command and Control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
Tele-Autonomous control involving contact. Final Report Thesis; [object localization
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.
1990-01-01
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features: featured points (point-to-point matching) and featured unit direction vectors (vector-to-vector matching) can also be used, and there is no upper limit on the number of input features. The algorithm allows the use of redundant features to find a better solution. It uses dual number quaternions to represent the position and orientation of an object and uses the least squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties the operator faces when controlling a remote robot to perform manipulation tasks are also discussed. The main problems are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.
Statistical process control using optimized neural networks: a case study.
Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid
2014-09-01
The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart demonstrates that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two respects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape features and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network, and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen to recognize the CCPs. Second, a hybrid heuristic recognition system is introduced based on the cuckoo optimization algorithm (COA) to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy.
Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)
NASA Astrophysics Data System (ADS)
Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao
2017-04-01
This study proposed a novel method to extract endmembers from hyperspectral images based on the discrete firefly algorithm (EE-DFA). Endmembers are the input of many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem aimed at the best spectral unmixing results, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted on synthetic hyperspectral datasets with different SNR and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters in the proposed method was tested on both the synthetic datasets and the AVIRIS Cuprite dataset, and a recommended parameter setting was proposed. The results demonstrate that the proposed EE-DFA method shows better performance than the existing popular methods and is robust under different SNR conditions.
Blessy, S A Praylin Selva; Sulochana, C Helen
2015-01-01
Segmentation of brain tumors from magnetic resonance imaging (MRI) becomes very complicated due to the structural complexity of the human brain and the presence of intensity inhomogeneities. The aim is to propose a method that effectively segments brain tumors from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this task. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion, and clustering. Different validation measures are used to evaluate the performance of the proposed method with different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumors from MR images.
Data reduction and calibration for LAMOST survey
NASA Astrophysics Data System (ADS)
Luo, Ali; Zhang, Jiannan; Chen, Jianjun; Song, Yihan; Wu, Yue; Bai, Zhongrui; Wang, Fengfei; Du, Bing; Zhang, Haotong
2014-01-01
There are three data pipelines for the LAMOST survey. The raw data are reduced to one-dimensional spectra by the data reduction pipeline (2D pipeline), the extracted spectra are classified and measured by the spectral analysis pipeline (1D pipeline), and stellar parameters are measured by the LASP pipeline. (a) The data reduction pipeline. The main tasks of the data reduction pipeline include bias calibration, flat fielding, spectral extraction, sky subtraction, wavelength calibration, exposure merging and wavelength band connection. (b) The spectral analysis pipeline. This pipeline is designed to classify and identify objects from the extracted spectra and to measure their redshift (or radial velocity). The PCAZ method (Glazebrook et al. 1998) is applied for classification and redshift measurement. (c) The stellar parameters pipeline (LASP). LASP estimates stellar atmospheric parameters, e.g. effective temperature Teff, surface gravity log g, and metallicity [Fe/H], for F, G and K type stars. To determine these fundamental stellar measurements effectively, three steps with different methods are employed. The first step uses line indices to approximately constrain the effective temperature range of the analyzed star. Second, a set of initial approximate values of the three parameters is obtained with a template fitting method. Finally, ULySS (Koleva et al. 2009) gives the final parameter values by minimizing the χ² value between the observed spectrum and a multidimensional grid of model spectra generated by interpolating the ELODIE library. There are two additional classification procedures, for A type and M type stars. For A type stars, the standard MK system (Gray et al. 2009) is employed to give each object a temperature class and luminosity type. For M type stars, objects are classified into subclasses by an improved Hammer method, and the metallicity of each object is also given. During the pilot survey, the algorithms were improved and the pipelines were tested. The products of the LAMOST survey will include extracted and calibrated spectra in FITS format, a catalog of FGK stars with stellar parameters, a catalog of M dwarfs with subclass and metallicity, and a catalog of A type stars with MK classification. A part of the pilot survey data, including about 319 000 high quality spectra with SNR > 10, a catalog of stellar parameters of FGK stars and another catalog of subclasses of M type stars, was released to the public in August 2012 (Luo et al. 2012). The general survey started in October 2012 and has completed its first year. The formal data release one (DR1), which will include both the pilot survey and the first year of the general survey, is being prepared and is planned to be released under the LAMOST data policy.
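A much-simplified sketch of the χ²-matching idea in the third LASP step is given below; the actual pipeline interpolates a multidimensional ELODIE-based grid through ULySS rather than scanning a discrete template list, so this is only a schematic with illustrative names:

```python
import numpy as np

def fit_parameters(obs_flux, obs_err, grid_fluxes, grid_params):
    """Return the template grid point (e.g. Teff, log g, [Fe/H]) that
    minimizes chi-squared against the observed spectrum. All spectra
    are assumed to share a common wavelength grid."""
    chi2 = np.array([np.sum(((obs_flux - f) / obs_err) ** 2)
                     for f in grid_fluxes])
    i = int(np.argmin(chi2))
    return grid_params[i], float(chi2[i])
```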
A seismic fault recognition method based on ant colony optimization
NASA Astrophysics Data System (ADS)
Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong
2018-05-01
Fault recognition is an important part of seismic interpretation, and although many methods exist for this task, none can recognize faults with sufficient accuracy. To address this problem, we propose a new fault recognition method based on ant colony optimization which can locate faults precisely and extract them from the seismic section. Firstly, seismic horizons are extracted by the connected component labeling algorithm; secondly, the fault locations are determined according to the horizontal endpoints of each horizon; thirdly, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each rectangular block are treated as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the seismic section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. The optimal route from nest to food calculated by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The availability and advantages of the proposed method were validated by the experimental results.
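The block-wise ant search described above can be sketched, under assumptions, as an ant colony looking for a low-cost top-to-bottom path through a grid whose cell costs come from the seismic amplitude; the parameters and update rules below are generic ACO choices, not the paper's:

```python
import numpy as np

def aco_path(cost, n_ants=30, n_iter=50, rho=0.1, alpha=1.0, beta=2.0, seed=0):
    """Ant colony search for a low-cost top-to-bottom path through a
    rows x cols cost grid (amplitude as terrain height); each ant steps
    down one row to one of the three adjacent columns."""
    rng = np.random.default_rng(seed)
    rows, cols = cost.shape
    tau = np.ones((rows, cols))             # pheromone per cell
    best_path, best_cost = None, np.inf
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            c = int(rng.integers(cols))
            path, total = [c], cost[0, c]
            for r in range(1, rows):
                cand = [x for x in (c - 1, c, c + 1) if 0 <= x < cols]
                w = np.array([tau[r, x] ** alpha * (1.0 / (cost[r, x] + 1e-9)) ** beta
                              for x in cand])
                c = cand[rng.choice(len(cand), p=w / w.sum())]
                path.append(c)
                total += cost[r, c]
            paths.append((path, total))
            if total < best_cost:
                best_path, best_cost = path, total
        tau *= (1.0 - rho)                  # pheromone evaporation
        for path, total in paths:           # deposit proportional to path quality
            for r, c in enumerate(path):
                tau[r, c] += 1.0 / (total + 1e-9)
    return best_path, best_cost
```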
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Reconstructing the Dwarf Galaxy Progenitor from Tidal Streams Using MilkyWay@home
NASA Astrophysics Data System (ADS)
Newberg, Heidi; Shelton, Siddhartha
2018-04-01
We attempt to reconstruct the mass and radial profile of stars and dark matter in the dwarf galaxy progenitor of the Orphan Stream, using only information from the stars in the Orphan Stream. We show that given perfect data and perfect knowledge of the dwarf galaxy profile and Milky Way potential, we are able to reconstruct the mass and radial profiles of both the stars and dark matter in the progenitor to high accuracy using only the density of stars along the stream and either the velocity dispersion or width of the stream in the sky. To perform this test, we simulated the tidal disruption of a two component (stars and dark matter) dwarf galaxy along the orbit of the Orphan Stream. We then created a histogram of the density of stars along the stream and a histogram of either the velocity dispersion or width of the stream in the sky as a function of position along the stream. The volunteer supercomputer MilkyWay@home was given these two histograms, the Milky Way potential model, and the orbital parameters for the progenitor. N-body simulations were run, varying dwarf galaxy parameters and the time of disruption. The goodness-of-fit of the model to the data was determined using an Earth-Mover Distance algorithm. The parameters were optimized using Differential Evolution. Future work will explore whether currently available information on the Orphan Stream stars is sufficient to constrain its progenitor, and how sensitive the optimization is to our knowledge of the Milky Way potential and the density model of the dwarf galaxy progenitor, as well as a host of other real-life unknowns.
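The goodness-of-fit measure named here, the Earth Mover's Distance between 1-D histograms, has a particularly simple form when the bins are shared and unit-spaced; a minimal sketch (not the MilkyWay@home implementation) follows:

```python
import numpy as np

def emd_1d(h1, h2):
    """EMD between two histograms over the same unit-spaced bins: after
    normalization it reduces to the L1 distance between the cumulative
    sums."""
    p = np.asarray(h1, dtype=float)
    q = np.asarray(h2, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.abs(np.cumsum(p - q)).sum())
```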
The IceAge ERS Program: Probing Building blocks of Life During the JWST Era
NASA Astrophysics Data System (ADS)
McClure, Melissa K.; Boogert, Adwin; Linnartz, Harold; Beck, Tracy L.; van Dishoeck, Ewine; Egami, Eiichi; Garrod, Robin; Gordon, Karl D.; Palumbo, Maria Elisabetta; Brown, Wendy; Fraser, Helen; Ioppolo, Sergio; Jimenez-Serra, Izaskun; McCoustra, Martin; Noble, Jennifer; Pendleton, Yvonne J.; Pontoppidan, Klaus; Viti, Serena; Chiar, Jean E.; Caselli, Paola; Bailey, John Ira; Jorgensen, Jes; Kristensen, Lars; Murillo, Nadia; Oberg, Karin I.; IceAge ERS Team Collaborators
2018-06-01
Icy grain mantles are the main reservoir for volatile elements in star-forming regions across the Universe, as well as the formation site of pre-biotic complex organic molecules (COMs) seen in our Solar System. Through the IceAge Early Release Science program, we will trace the evolution of pristine and complex ice chemistry in a representative low-mass star-forming region through observations of a pre-stellar core, a Class 0 protostar, a Class I protostar, and a protoplanetary disk. Comparing high spectral resolution (R~1500-3000) and sensitivity (S/N~100-300) observations from 3 to 15 micron to template spectra, we will map the spatial distribution of ices down to ~20-50 AU in these targets to identify when, and at what visual extinction, the formation of each ice species begins. Such high-resolution spectra will allow us to search for new COMs, as well as distinguish between different ice morphologies, thermal histories, and mixing environments. The analysis of these data will result in science products beneficial to Cycle 2 proposers. A newly updated public laboratory ice database will provide feature identifications for all of the expected ices, while a chemical model fit to the observed ice abundances will be released publicly as a grid, with varied metallicity and UV fields to simulate other environments. We will create improved algorithms to extract NIRCAM WFSS spectra in crowded fields with extended sources as well as optimize the defringing of MIRI LRS spectra in order to recover broad spectral features. We anticipate that these resources will be particularly useful for astrochemistry and spectroscopy of fainter, extended targets like star-forming regions of the SMC/LMC or more distant galaxies.
An Energy-Aware Trajectory Optimization Layer for sUAS
NASA Astrophysics Data System (ADS)
Silva, William A.
The focus of this work is the implementation of an energy-aware trajectory optimization algorithm that enables small unmanned aircraft systems (sUAS) to operate in unknown, dynamic severe weather environments. The software is designed as a component of an Energy-Aware Dynamic Data Driven Application System (EA-DDDAS) for sUAS. This work addresses the challenges of integrating and executing an online trajectory optimization algorithm during mission operations in the field. Using simplified aircraft kinematics, the energy-aware algorithm enables extraction of kinetic energy from measured winds to optimize thrust use and endurance during flight. The optimization layer, based upon a nonlinear program formulation, extracts energy by exploiting strong wind velocity gradients in the wind field, a process known as dynamic soaring. The trajectory optimization layer extends the energy-aware path planner developed by Wenceslao Shaw-Cortez (Shaw-Cortez, 2013) to include additional mission configurations, simulations with a 6-DOF model, and validation of the system with flight testing in June 2015 in Lubbock, Texas. The trajectory optimization layer interfaces with several components within the EA-DDDAS to provide an sUAS with optimal flight trajectories in real-time during severe weather. As a result, execution timing, data transfer, and scalability are considered in the design of the software. Severe weather also poses a measure of unpredictability to the system with respect to communication between systems and available data resources during mission operations. A heuristic mission tree with different cost functions and constraints is implemented to provide a level of adaptability to the optimization layer. Simulations and flight experiments are performed to assess the efficacy of the trajectory optimization layer. The results are used to assess the feasibility of flying dynamic soaring trajectories with existing controllers as well as to verify the interconnections between EA-DDDAS components. Results also demonstrate the usage of the trajectory optimization layer in conjunction with a lattice-based path planner as a method of guiding the optimization layer and stitching together subsequent trajectories.
NASA Astrophysics Data System (ADS)
Bay, Annick; Mayer, Alexandre
2014-09-01
The efficiency of light-emitting diodes (LED) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflections due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context, simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters. It is also significantly less time-consuming than the previous approach that was based on a systematic scan of parameters. The results of the genetic algorithm show that (1) the calculations can be performed in a smaller amount of time and (2) the light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled Wave Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
Ant-cuckoo colony optimization for feature selection in digital mammogram.
Jona, J B; Nagaveni, N
2014-01-15
Digital mammography is the only effective screening method to detect breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram. Not all of these features are essential for classifying the mammogram; identifying the relevant features is therefore the aim of this work. Feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that the ants walk through paths where the pheromone density is high, which makes the whole process slow; hence CS is employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used along with the ACO to classify normal mammograms from abnormal ones. Experiments are conducted on the miniMIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms. The results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques.
SEARCHING FOR THE HR 8799 DEBRIS DISK WITH HST/STIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerard, B.; Marois, C.; Tannock, M.
We present a new algorithm for space telescope high contrast imaging of close-to-face-on planetary disks called Optimized Spatially Filtered (OSFi) normalization. This algorithm is used on HR 8799 Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph (STIS) coronagraphic archival data, showing an over-luminosity after reference star point-spread function (PSF) subtraction that may be from the inner disk and/or planetesimal belt components of this system. The PSF-subtracted radial profiles in two separate epochs from 2011 and 2012 are consistent with one another, and self-subtraction shows no residual in both epochs. We explore a number of possible false-positive scenarios that could explain this residual flux, including telescope breathing, spectral differences between HR 8799 and the reference star, imaging of the known warm inner disk component, OSFi algorithm throughput and consistency with the standard spider normalization HST PSF subtraction technique, and coronagraph misalignment from pointing accuracy. In comparison to another similar STIS data set, we find that the over-luminosity is likely a result of telescope breathing and spectral differences between HR 8799 and the reference star. Thus, assuming a non-detection, we derive upper limits on the HR 8799 dust belt mass in small grains. In this scenario, we find that the flux of these micron-sized dust grains leaving the system due to radiation pressure is small enough to be consistent with measurements of other debris disk halos.
Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong
2017-01-01
Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues on the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation on identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to distinguish myristoylated and non-myristoylated sites. Then, feature selection methods including maximum relevance and minimum redundancy (mRMR), incremental feature selection (IFS), and a machine learning algorithm (the extreme learning machine method) were adopted to extract optimal features for identifying myristoylation sites in protein sequences. As a result, 41 key features were extracted and used to build an optimal prediction model, whose effectiveness was further validated by its performance on a test dataset. Furthermore, detailed analyses were performed on the extracted 41 features to gain insight into the mechanism of myristoylation modification. This study provides a new computational method for identifying myristoylation sites in protein sequences, and we believe that it can be a useful tool to predict myristoylation sites from protein sequences. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
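The incremental feature selection (IFS) stage described above can be sketched as a loop that grows a pre-ranked feature list one feature at a time and keeps the best-scoring prefix; the classifier below is a stand-in for the paper's extreme learning machine, so this is illustrative only:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def incremental_feature_selection(X, y, ranked_idx, cv=5):
    """Grow the mRMR-ranked feature list one feature at a time and keep
    the prefix with the best cross-validated accuracy."""
    best_k, best_score = 1, -np.inf
    for k in range(1, len(ranked_idx) + 1):
        score = cross_val_score(KNeighborsClassifier(),
                                X[:, ranked_idx[:k]], y, cv=cv).mean()
        if score > best_score:
            best_k, best_score = k, score
    return list(ranked_idx[:best_k]), best_score
```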
Galileo spacecraft autonomous attitude determination using a V-slit star scanner
NASA Technical Reports Server (NTRS)
Mobasser, Sohrab; Lin, Shuh-Ren
1991-01-01
The autonomous attitude determination system of the Galileo spacecraft, consisting of a radiation-hardened star scanner and a processing algorithm, is presented. The algorithms applied in this system are sequential star identification and attitude estimation. The star scanner model is reviewed in detail, and the flight software parameters that must be updated frequently during flight, due to degradation of the scanner response and changes in the star background, are identified.
Rapid solution of large-scale systems of equations
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for the generation/assembly of element matrices, the solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers, and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms are addressed.
A rigorous comparison of different planet detection algorithms
NASA Astrophysics Data System (ADS)
Tingley, B.
2003-05-01
The idea of finding extrasolar planets (ESPs) through observations of drops in stellar brightness due to transiting objects has been around for decades. It has only been in the last ten years, however, that any serious attempts to find ESPs became practical. The discovery of a transiting planet around the star HD 209458 (Charbonneau et al. \\cite{charbonneau}) has led to a veritable explosion of research, because the photometric method is the only way to search a large number of stars for ESPs simultaneously with current technology. To this point, however, there has been limited research into the various techniques used to extract the subtle transit signals from noise, mainly brief summaries in various papers focused on publishing transit-like signatures in observations. The scheduled launches over the next few years of satellites whose primary or secondary science missions will be ESP discovery motivates a review and a comparative study of the various algorithms used to perform the transit identification, to determine rigorously and fairly which one is the most sensitive under which circumstances, to maximize the results of past, current, and future observational campaigns.
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-01-01
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economical and enhanced automated optical guidance system, based on optimization research into a light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. The LED target was optimized in many respects, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computation and judgment. After multiple indoor experiments, the guidance system was applied in a hot water pipeline installation project, with accuracy controlled within 2 mm over a 48-m distance, providing accurate line and grade control and verifying the feasibility and reliability of the guidance system. PMID:29462855
New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Technical Reports Server (NTRS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.;
2012-01-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approximately 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field-of-view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns_-adi under a BSD license.
NASA Astrophysics Data System (ADS)
Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming
2017-11-01
Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either made the object boundaries weak or added extra information to them. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For the case of additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixel demonstrated optimal performance for compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. In conclusion, to solve real-world problems effectively, more robust superpixel algorithms must be developed.
Star/Galaxy Separation in Hyper Suprime-Cam and Mapping the Milky Way with Star Counts
NASA Astrophysics Data System (ADS)
Garmilla, Jose Antonio
We study the problem of separating stars and galaxies in the Hyper Suprime-Cam (HSC) multi-band imaging data at high galactic latitudes. We show that the current separation technique implemented in the HSC pipeline is unable to produce samples of stars with i > 24 without significant contamination from galaxies (> 50%). We study various methods for measuring extendedness in HSC with simulated and real data and find that there are a number of available techniques that give nearly optimal results; the extendedness measure HSC is currently using is among these. We develop a star/galaxy separation method for HSC based on the Extreme Deconvolution (XD) algorithm that uses colors and extendedness simultaneously, and show that with it we can generate samples of faint stars while keeping contamination from galaxies under control to i ≤ 25. We apply our star/galaxy separation method to carry out a preliminary study of the structure of the Milky Way (MW) with main sequence (MS) stars using photometric parallax relations derived for the HSC photometric system. We show that it will be possible to generate a tomography of the MW stellar halo to galactocentric radii of ~100 kpc with ~10^6 MS stars in the HSC Wide layer once the survey has been completed. We report two potential detections of the Sagittarius tidal stream with MS stars in the XMM and GAMA15 fields at ≈20 kpc and ≈40 kpc, respectively.
Optimized emission in nanorod arrays through quasi-aperiodic inverse design.
Anderson, P Duke; Povinelli, Michelle L
2015-06-01
We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement
Hao, Yansong; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-01-01
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which contain only transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency. PMID:29597280
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which contain only transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
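As a minimal illustration of the fixed-basis setting both records describe (the sparse basis fixed to a unit matrix, no reconstruction step), an ISTA-style majorization-minimization iteration for plain l1-penalized denoising is sketched below; the paper's objective additionally includes an impulsive feature-preserving factor, so this is not the authors' algorithm, and the parameters are illustrative:

```python
import numpy as np

def soft(x, t):
    """Soft-threshold operator, the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mm_sparse_denoise(y, lam=0.5, mu=1.0, n_iter=100):
    """MM/ISTA iteration for min_x 0.5*||y - x||^2 + lam*||x||_1 with
    the basis fixed to the identity; with this choice the iteration
    converges to the closed-form soft-threshold of y."""
    y = np.asarray(y, dtype=float)
    x = np.zeros_like(y)
    for _ in range(n_iter):
        x = soft(x + (y - x) / mu, lam / mu)   # majorize with step 1/mu, then threshold
    return x
```

Envelope analysis of the resulting sparse coefficients could then proceed, for example, via the Hilbert transform (scipy.signal.hilbert).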
Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian
2018-03-20
The phase slope method, which estimates height through the fringe pattern frequency, and the algorithm which estimates height through the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), optimizing the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically; the position of the filter pass-band is thereby determined. The width of the filter window is optimized in simulation to balance the elimination of noise against the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiments show that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential to improve immunity to environmental noise by designing an adaptive filter, once the signal SNR can be estimated accurately.
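A schematic of the frequency-domain filtering step, assuming a Gaussian pass-band window centered on the theoretically computed carrier bin, might look as follows; the window width is the parameter the text describes optimizing against noise versus ringing:

```python
import numpy as np

def bandpass_phase(signal, f0_bin, sigma_bins):
    """Apply a one-sided Gaussian window centered on the carrier bin
    f0_bin, then return the unwrapped phase of the resulting
    (approximately analytic) signal. f0_bin and sigma_bins are in FFT
    bins and are illustrative names."""
    S = np.fft.fft(np.asarray(signal, dtype=float))
    k = np.arange(len(S))
    window = np.exp(-0.5 * ((k - f0_bin) / sigma_bins) ** 2)
    analytic = np.fft.ifft(S * window)      # negative band effectively suppressed
    return np.unwrap(np.angle(analytic))
```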
Genetic algorithm for the optimization of features and neural networks in ECG signals classification
NASA Astrophysics Data System (ADS)
Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu
2017-01-01
Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
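A toy version of the GA-optimizes-network-weights idea is sketched below; it evolves the flattened weights of a one-hidden-layer network by truncation selection and Gaussian mutation. The paper combines the GA with back-propagation training and feature-dimension reduction, so this sketch covers only the evolutionary part, with illustrative hyperparameters:

```python
import numpy as np

def mlp_scores(w, X, n_hidden, n_out):
    """One-hidden-layer network with weights unpacked from a flat
    vector, so the whole net can be evolved by a GA."""
    n_in = X.shape[1]
    w1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = w[n_in * n_hidden:].reshape(n_hidden, n_out)
    return np.tanh(X @ w1) @ w2

def ga_train(X, y, n_hidden=8, pop=40, gens=100, sigma=0.1, seed=0):
    """Evolve network weights by truncation selection plus Gaussian
    mutation (crossover omitted for brevity); fitness is training
    accuracy. y holds integer class labels 0..K-1."""
    rng = np.random.default_rng(seed)
    n_out = int(y.max()) + 1
    dim = X.shape[1] * n_hidden + n_hidden * n_out
    P = rng.normal(0.0, 1.0, size=(pop, dim))
    def fitness(w):
        return float((mlp_scores(w, X, n_hidden, n_out).argmax(1) == y).mean())
    for _ in range(gens):
        f = np.array([fitness(w) for w in P])
        parents = P[np.argsort(f)[-pop // 2:]]                 # keep the best half
        children = parents + rng.normal(0.0, sigma, parents.shape)
        P = np.vstack([parents, children])
    f = np.array([fitness(w) for w in P])
    return P[int(np.argmax(f))]
```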
Ahirwal, M K; Kumar, Anil; Singh, G K
2013-01-01
This paper explores the migration of adaptive filtering with swarm intelligence/evolutionary techniques employed in the field of electroencephalogram/event-related potential noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potentials, real visual evoked potentials, and real sensorimotor evoked potentials are used, due to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. The traditional algorithms take negligible time but are unable to offer good shape preservation of the ERP, with an average computational time and shape measure difference of 1.41E-02 s and 2.60E+00, respectively.
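For reference, the classic least-mean-square adaptive noise canceler used as a baseline in this comparison can be written compactly; the step size and filter order below are illustrative:

```python
import numpy as np

def lms_anc(primary, reference, order=8, mu=0.01):
    """LMS adaptive noise canceler: the filter learns to predict the
    noise in `primary` from the correlated `reference` channel; the
    error output is the cleaned signal (e.g. the ERP)."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # most recent reference samples
        e = primary[n] - w @ x             # error = estimate of the desired signal
        w += mu * e * x                    # LMS weight update
        out[n] = e
    return out
```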
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with existing techniques, while the isosurface extraction time is nearly optimal.
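A toy version of the coalescing idea might look as follows: a binary tree over time steps in which each node stores per-cell extreme values merged from its children, and an isosurface query descends to the node for the requested time step. The real structure coalesces adaptively within an error tolerance, which is omitted here, and all names are illustrative:

```python
import numpy as np

class TemporalNode:
    """Node covering time interval [t0, t1]; stores each cell's
    (min, max) coalesced over that interval."""
    def __init__(self, mins, maxs, t0, t1, left=None, right=None):
        self.mins, self.maxs = mins, maxs
        self.t0, self.t1 = t0, t1
        self.left, self.right = left, right

def build(cell_values, t0, t1):
    """cell_values: array of shape (timesteps, n_cells)."""
    if t0 == t1:
        v = cell_values[t0]
        return TemporalNode(v.copy(), v.copy(), t0, t1)
    mid = (t0 + t1) // 2
    l = build(cell_values, t0, mid)
    r = build(cell_values, mid + 1, t1)
    return TemporalNode(np.minimum(l.mins, r.mins),
                        np.maximum(l.maxs, r.maxs), t0, t1, l, r)

def candidate_cells(node, t, iso):
    """Descend to the finest node containing time t and return cells
    whose coalesced [min, max] range brackets the isovalue."""
    while node.left is not None:
        node = node.left if t <= node.left.t1 else node.right
    return np.where((node.mins <= iso) & (node.maxs >= iso))[0]
```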
A robust star identification algorithm with star shortlisting
NASA Astrophysics Data System (ADS)
Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon
2018-05-01
A star tracker provides the most accurate attitude solution in terms of arc seconds compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as star identification algorithm, forms the most crucial part of a star tracker in the LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. Firstly, the star IDs are shortlisted based on worst-case patch mismatch, and later stars are identified in the image by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying size of the field of view. The proposed idea is also benchmarked with the state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on the 3104 real star images captured by a star tracker SST-20S currently mounted on a satellite. The proposed technique can achieve an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results depict that the proposed technique is highly robust and achieves a high speed of identification suitable for actual space applications.
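The basic measurement being matched, the inter-star angular distance, and a brute-force version of pair matching can be sketched as follows; an actual tracker replaces the inner loops with a precomputed sorted pair database, and the shortlisting and sequential confirmation steps of the paper are not reproduced here:

```python
import numpy as np

def ang_dist(u, v):
    """Angle between two unit direction vectors, in radians."""
    return float(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def match_pairs(obs_vecs, cat_vecs, tol=1e-4):
    """For every pair of observed star vectors, list the catalog pairs
    whose angular separation agrees within tol (radians). Purely a
    schematic of the measurement being matched."""
    n, m = len(obs_vecs), len(cat_vecs)
    out = {}
    for i in range(n):
        for j in range(i + 1, n):
            d = ang_dist(obs_vecs[i], obs_vecs[j])
            out[(i, j)] = [(a, b) for a in range(m) for b in range(a + 1, m)
                           if abs(ang_dist(cat_vecs[a], cat_vecs[b]) - d) < tol]
    return out
```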
Ortho Image and DTM Generation with Intelligent Methods
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadeghian, S.
2013-10-01
Nowadays, artificial intelligence algorithms are being considered in GIS and remote sensing. The genetic algorithm and the artificial neural network are two intelligent methods used for optimizing image processing programs such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as the geometric modelling of satellite images for ortho photo generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 was tested with rational functions and 2D & 3D polynomials. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for the optimization of rational functions and 2D & 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixels, and the accuracy (RMSE) with the GA and the rational function method for the Worldview-2 image was 0.930 pixels. As a further artificial intelligence optimization method, neural networks were used. With the use of a perceptron network on the Worldview-2 image, a result of 0.84 pixels was obtained with 4 neurons in the middle layer. The conclusion is that artificial intelligence algorithms make it possible to optimize the existing models and obtain better results than usual ones. Finally, the artificial intelligence methods, namely genetic algorithms and neural networks, were examined on sample data for optimizing interpolation and for generating Digital Terrain Models. The results were then compared with existing conventional methods, showing that these methods have a high capacity for height interpolation and that using these networks for interpolating and optimizing inverse-distance-based weighting methods leads to highly accurate estimation of heights.
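The inverse-distance weighting scheme mentioned at the end is simple enough to state exactly; a minimal sketch, with an illustrative power parameter, is:

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted height interpolation: the estimate at
    `query` is a weighted mean of known heights with weights 1/d^power.
    points: (n, d) sample coordinates; values: (n,) heights."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                      # query coincides with a sample
        return values[int(np.argmin(d))]
    w = 1.0 / d ** power
    return float(w @ values) / float(w.sum())
```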
Billeci, Lucia; Varanini, Maurizio
2017-01-01
The non-invasive fetal electrocardiogram (fECG) technique has recently received considerable interest for monitoring fetal health. The aim of our paper is to propose a novel fECG algorithm based on the combination of the criteria of independent source separation and quality index optimization (ICAQIO-based). The algorithm was compared with two methods applying the two criteria independently, the ICA-based and the QIO-based methods, which were previously developed by our group. All three methods were tested on the recently implemented Fetal ECG Synthetic Database (FECGSYNDB). Moreover, the performance of the algorithm was tested on real data from the PhysioNet fetal ECG Challenge 2013 Database. The proposed combined method outperformed the other two algorithms on the FECGSYNDB (ICAQIO-based: 98.78%, QIO-based: 97.77%, ICA-based: 97.61%). Significant differences were obtained in particular under conditions with uterine contractions and maternal and fetal ectopic beats. On the real data, all three methods obtained very high performance, with the QIO-based method performing slightly better than the other two (ICAQIO-based: 99.38%, QIO-based: 99.76%, ICA-based: 99.37%). The findings from this study suggest that the proposed method could potentially be applied as a novel algorithm for the accurate extraction of fECG, especially in critical recording conditions. PMID:28509860
SpecOp: Optimal Extraction Software for Integral Field Unit Spectrographs
NASA Astrophysics Data System (ADS)
McCarron, Adam; Ciardullo, Robin; Eracleous, Michael
2018-01-01
The Hobby-Eberly Telescope’s new low resolution integral field spectrographs, LRS2-B and LRS2-R, each cover a 12”x6” area on the sky with 280 fibers and generate spectra with resolutions between R=1100 and R=1900. To extract 1-D spectra from the instrument’s 3D data cubes, a program is needed that is flexible enough to work for a wide variety of targets, including continuum point sources, emission line sources, and compact sources embedded in complex backgrounds. We therefore introduce SpecOp, a user-friendly python program for optimally extracting spectra from integral-field unit spectrographs. As input, SpecOp takes a sky-subtracted data cube consisting of images at each wavelength increment set by the instrument’s spectral resolution, and an error file for each count measurement. All of these files are generated by the current LRS2 reduction pipeline. The program then collapses the cube in the image plane using the optimal extraction algorithm detailed by Keith Horne (1986). The various user-selected options include the fraction of the total signal enclosed in a contour-defined region, the wavelength range to analyze, and the precision of the spatial profile calculation. SpecOp can output the weighted counts and errors at each wavelength in various table formats using python’s astropy package. We outline the algorithm used for extraction and explain how the software can be used to easily obtain high-quality 1-D spectra. We demonstrate the utility of the program by applying it to spectra of a variety of quasars and AGNs. In some of these targets, we extract the spectrum of a nuclear point source that is superposed on a spatially extended galaxy.
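Since the abstract names the Horne (1986) algorithm explicitly, the core of that optimal extraction step can be sketched as follows; cosmic-ray rejection, profile fitting, and the contour-defined extraction region that SpecOp adds on top are omitted, and the array layout is an assumption:

```python
import numpy as np

def horne_extract(frame, var, profile):
    """Horne (1986) optimal extraction. frame, var, profile are 2-D
    arrays (spatial x wavelength); profile is the spatial profile P,
    normalized per wavelength column. Per column:
    flux = sum(P*D/V) / sum(P^2/V),  var(flux) = 1 / sum(P^2/V)."""
    P = profile / profile.sum(axis=0)          # normalize profile per column
    denom = (P ** 2 / var).sum(axis=0)
    flux = (P * frame / var).sum(axis=0) / denom
    return flux, 1.0 / denom
```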
APPHi: Automated Photometry Pipeline for High Cadence Large Volume Data
NASA Astrophysics Data System (ADS)
Sánchez, E.; Castro, J.; Silva, J.; Hernández, J.; Reyes, M.; Hernández, B.; Alvarez, F.; García T.
2018-04-01
APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can also be used with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars that the software has to handle in an enormous number of frames, it is optimized to automatically find the best values for the parameters needed to carry out the photometry, such as the mask size for the aperture, the size of the window for extraction of a single star, and the count threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.
Flattening maps for the visualization of multibranched vessels.
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2005-02-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided.
Flattening Maps for the Visualization of Multibranched Vessels
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2013-01-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided. PMID:15707245
Data Mining and Optimization Tools for Developing Engine Parameters Tools
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1998-01-01
This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. Tricia Erhardt and I studied the problem domain for developing an engine condition monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset, which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic algorithm based search programs, which were written in C++ and used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.
Zhou, Fuqiang; Su, Zhen; Chai, Xinghua; Chen, Lipeng
2014-01-01
This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory. PMID:25347581
Spatial-time-state fusion algorithm for defect detection through eddy current pulsed thermography
NASA Astrophysics Data System (ADS)
Xiao, Xiang; Gao, Bin; Woo, Wai Lok; Tian, Gui Yun; Xiao, Xiao Ting
2018-05-01
Eddy Current Pulsed Thermography (ECPT) has received extensive attention due to its high sensitivity in detecting surface and subsurface cracks. However, identifying defects without any prior knowledge remains a difficult challenge for unsupervised detection. This paper presents a spatial-time-state feature fusion algorithm to obtain a full profile of the defects by directional scanning. The proposed method conducts feature extraction using independent component analysis (ICA) and automatic feature selection embedding a genetic algorithm. Finally, the optimal features of each step are fused to reconstruct the defects by applying the common orthogonal basis extraction (COBE) method. Experiments have been conducted to validate the study and verify the efficacy of the proposed method for blind defect detection.
Spectral analysis of early-type stars using a genetic algorithm based fitting method
NASA Astrophysics Data System (ADS)
Mokiem, M. R.; de Koter, A.; Puls, J.; Herrero, A.; Najarro, F.; Villamariz, M. R.
2005-10-01
We present the first automated fitting method for the quantitative spectroscopy of O- and early B-type stars with stellar winds. The method combines the non-LTE stellar atmosphere code fastwind from Puls et al. (2005, A&A, 435, 669) with the genetic algorithm based optimization routine pikaia from Charbonneau (1995, ApJS, 101, 309), allowing for a homogeneous analysis of upcoming large samples of early-type stars (e.g. Evans et al. 2005, A&A, 437, 467). In this first implementation we use continuum normalized optical hydrogen and helium lines to determine photospheric and wind parameters. We have assigned weights to these lines accounting for line blends with species not taken into account, lacking physics, and/or possible or potential problems in the model atmosphere code. We find the method to be robust, fast, and accurate. Using our method we analysed seven O-type stars in the young cluster Cyg OB2 and five other Galactic stars with high rotational velocities and/or low mass loss rates (including 10 Lac, ζ Oph, and τ Sco) that have been studied in detail with a previous version of fastwind. The fits are found to have a quality that is comparable to or even better than that produced by the classical “by eye” method. We define error bars on the model parameters based on the maximum variations of these parameters in the models that cluster around the global optimum. Using this concept, for the investigated dataset we are able to recover mass-loss rates down to ~6 × 10^-8 M⊙ yr^-1 to within an error of a factor of two, ignoring possible systematic errors due to uncertainties in the continuum normalization. Comparison of our derived spectroscopic masses with those derived from stellar evolutionary models shows very good agreement, i.e. based on the limited sample that we have studied we do not find indications for a mass discrepancy. For three stars we find significantly higher surface gravities than previously reported. We identify this to be due to differences in the weighting of Balmer line wings between our automated method and “by eye” fitting and/or an improved multidimensional optimization of the parameters. The empirical modified wind momentum relation constructed on the basis of the stars analysed here agrees to within the error bars with the theoretical relation predicted by Vink et al. (2000, A&A, 362, 295), including those cases for which the winds are weak (i.e. less than a few times 10^-7 M⊙ yr^-1).
Modeling and optimization of energy systems using evolutionary algorithms (Modélisation et optimisation des systèmes énergétiques à l'aide d'algorithmes évolutifs)
NASA Astrophysics Data System (ADS)
Hounkonnou, Sessinou M. William
Optimization of thermal and nuclear plants has many economic as well as environmental advantages. The search for new operating points and the use of new tools to achieve this kind of optimization are therefore the subject of many studies. In this context, this project optimizes energy systems, specifically the secondary loop of the Gentilly-2 nuclear plant, using the extractions of the high- and low-pressure turbines as well as the extraction of the mixture coming from the steam generator. A detailed thermodynamic model of the various pieces of equipment of the secondary loop, such as the feedwater heaters, the moisture separator-reheater, the deaerator, the condenser, and the turbine, is carried out. We use Matlab software (version R2007b, 2007) with a library for the thermodynamic properties of water and steam (XSteam for Matlab, Holmgren, 2006). A model of the secondary loop is then obtained by assembling the different pieces of equipment. Simulation of the equipment and the complete cycle allowed us to identify two objective functions, the net output and the efficiency, which evolve in opposite directions as the extractions vary. Due to the complexity of the problem, we use a method based on genetic algorithms for the optimization. More precisely, we used a tool developed at the Institut de génie nucléaire named BEST (Boundary Exploration Search Technique), written in VBA* (Visual Basic for Applications), for its ability to converge quickly and to carry out a more exhaustive search at the border of the optimal solutions. The use of DDE (Dynamic Data Exchange) enabled us to link the simulator and the optimizer. The results show that several combinations of extractions still exist which make it possible to obtain a better operating point and improve the performance of the Gentilly-2 power station secondary loop. *Trademark of Microsoft
Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2016-01-01
Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians.
Extracting atmospheric turbulence and aerosol characteristics from passive imagery
NASA Astrophysics Data System (ADS)
Reinhardt, Colin N.; Wayne, D.; McBryde, K.; Cauble, G.
2013-09-01
Obtaining accurate, precise and timely information about the local atmospheric turbulence and extinction conditions and aerosol/particulate content remains a difficult problem with incomplete solutions. It has important applications in areas such as optical and IR free-space communications, imaging systems performance, and the propagation of directed energy. The capability to utilize passive imaging data to extract parameters characterizing atmospheric turbulence and aerosol/particulate conditions would represent a valuable addition to the current piecemeal toolset for atmospheric sensing. Our research investigates an application of fundamental results from optical turbulence theory and aerosol extinction theory combined with recent advances in image-quality-metrics (IQM) and image-quality-assessment (IQA) methods. We have developed an algorithm which extracts important parameters used for characterizing atmospheric turbulence and extinction along the propagation channel, such as the refractive-index structure parameter C_n^2, the Fried atmospheric coherence width r_0, and the atmospheric extinction coefficient β_ext, from passive image data. We analyze the algorithm performance using simulations based on modeling with turbulence modulation transfer functions. An experimental field campaign was organized and data were collected from passive imaging through turbulence of Siemens star resolution targets over several short littoral paths in Point Loma, San Diego, under conditions of various turbulence intensities. We present initial results of the algorithm's effectiveness using this field data and compare against measurements taken concurrently with other standard atmospheric characterization equipment. We also discuss some of the challenges encountered with the algorithm, tasks currently in progress, and approaches planned for improving the performance in the near future.
A Fault Recognition System for Gearboxes of Wind Turbines
NASA Astrophysics Data System (ADS)
Yang, Zhiling; Huang, Haiyue; Yin, Zidong
2017-12-01
Costs of maintenance and loss of power generation caused by faults in wind turbine gearboxes are the main components of operating costs for a wind farm. Therefore, the technology of condition monitoring and fault recognition for wind turbine gearboxes is becoming a hot topic. A condition monitoring and fault recognition system (CMFRS) is presented for condition-based maintenance (CBM) of wind turbine gearboxes in this paper. The vibration signals from acceleration sensors at different locations on the gearbox and the data from the supervisory control and data acquisition (SCADA) system are collected by the CMFRS. Then a feature extraction and optimization algorithm is applied to these operational data. Furthermore, to recognize gearbox faults, the GSO-LSSVR algorithm is proposed, combining the least squares support vector regression machine (LSSVR) with the Glowworm Swarm Optimization (GSO) algorithm. Finally, the results show that the fault recognition system used in this paper has a high rate of identifying three states of wind turbine gears; moreover, the combination of data features affects the identification rate, and the selection optimization algorithm presented in this paper yields a good data feature subset for fault recognition.
Galileo Attitude Determination: Experiences with a Rotating Star Scanner
NASA Technical Reports Server (NTRS)
Merken, L.; Singh, G.
1991-01-01
The Galileo experience with a rotating star scanner is discussed in terms of problems encountered in flight, solutions implemented, and lessons learned. An overview of the Galileo project and the attitude and articulation control subsystem is given and the star scanner hardware and relevant software algorithms are detailed. The star scanner is the sole source of inertial attitude reference for this spacecraft. Problem symptoms observed in flight are discussed in terms of effects on spacecraft performance and safety. Sources of these problems include contributions from flight software idiosyncrasies and inadequate validation of the ground procedures used to identify target stars for use by the autonomous on-board star identification algorithm. Problem fixes (some already implemented and some only proposed) are discussed. A general conclusion is drawn regarding the inherent difficulty of performing simulation tests to validate algorithms which are highly sensitive to external inputs of statistically 'rare' events.
Measuring Dark Matter With MilkyWay@home
NASA Astrophysics Data System (ADS)
Shelton, Siddhartha; Newberg, Heidi Jo; Arsenault, Matthew; Bauer, Jacob; Desell, Travis; Judd, Roland; Magdon-Ismail, Malik; Newby, Matthew; Rice, Colin; Thompson, Jeffrey; Ulin, Steve; Weiss, Jake; Widrow, Larry
2016-01-01
We perform N-body simulations of two-component dwarf galaxies (dark matter and stars follow separate distributions) falling into the Milky Way and the formation of tidal streams. Using MilkyWay@home we optimize the parameters of the progenitor dwarf galaxy and the orbital time to fit the simulated distribution of stars along the tidal stream to the observed distribution of stars. Our initial dwarf galaxy models are constructed with two separate Plummer profiles (one for the dark matter and one for the baryonic matter), sampled using a generalized distribution function for spherically symmetric systems. We perform rigorous testing to ensure that our simulated galaxies are in virial equilibrium and stable over the simulation time. The N-body simulations are performed using a Barnes-Hut tree algorithm. Optimization traverses the likelihood surface over our six model parameters using particle swarm and differential evolution methods. We have generated simulated data with known model parameters that are similar to those of the Orphan Stream. We show that we are able to recover a majority of our model parameters, and most importantly the mass-to-light ratio of the now disrupted progenitor galaxy, using MilkyWay@home. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.
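For readers unfamiliar with Plummer sampling, the sketch below draws positions from a single Plummer sphere by inverse-transform sampling of the enclosed-mass fraction. It is a simplified stand-in: the study samples a two-component (dark plus baryonic) model from a generalized distribution function, which also supplies velocities.

```python
# Sample 3-D positions from a Plummer sphere of scale radius `a` by
# inverting the enclosed-mass fraction M(r)/M = r^3 / (r^2 + a^2)^(3/2).
# Single-component positions only; the paper's models are two-component
# and include velocities drawn from a distribution function.
import numpy as np

def plummer_positions(n, a=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    r = a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)  # inverse CDF of mass fraction
    cos_t = rng.uniform(-1.0, 1.0, n)         # isotropic directions
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.column_stack((r * sin_t * np.cos(phi),
                            r * sin_t * np.sin(phi),
                            r * cos_t))

stars = plummer_positions(10000)
```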
NASA Astrophysics Data System (ADS)
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are fed into a proposed metaheuristic-based aggregation model and then converted to an origin-destination (O-D) matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO), and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers, and the optimality of the solutions of the different algorithms is measured in detail. The travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy outperforms the other evaluated approaches in terms of convergence tendency and optimality of the results, and that it can be utilized as an efficient approach to estimating transit O-D matrices.
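The core MFO update is compact enough to sketch. The version below is a minimal, simplified rendering of the published algorithm (flames are re-derived from the current population each iteration rather than carried over between iterations), with a stand-in sphere objective in place of the O-D aggregation cost.

```python
# Minimal moth-flame optimization (MFO) sketch for a generic objective.
# Simplification: the full algorithm also merges and carries the best
# flames across iterations; bounds and sizes are illustrative.
import numpy as np

def mfo(objective, dim, n_moths=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, (n_moths, dim))
    for l in range(iters):
        fitness = np.array([objective(m) for m in moths])
        flames = moths[np.argsort(fitness)]        # best moths become flames
        n_flames = round(n_moths - l * (n_moths - 1) / iters)
        a = -1.0 - l / iters                       # t is drawn from [a, 1]
        for i in range(n_moths):
            j = min(i, n_flames - 1)               # flame assigned to moth i
            d = np.abs(flames[j] - moths[i])
            t = (a - 1.0) * rng.random(dim) + 1.0
            # logarithmic-spiral flight around the assigned flame (b = 1)
            moths[i] = d * np.exp(t) * np.cos(2.0 * np.pi * t) + flames[j]
            moths[i] = np.clip(moths[i], lb, ub)
    best = min(moths, key=objective)
    return best, objective(best)

x, fx = mfo(lambda v: float(np.sum(v ** 2)), dim=4)
```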
Particle Swarm Optimization With Interswarm Interactive Learning Strategy.
Qin, Quande; Cheng, Shi; Zhang, Qingyu; Li, Li; Shi, Yuhui
2016-10-01
The learning strategy in the canonical particle swarm optimization (PSO) algorithm is often blamed as the primary reason for loss of diversity. Maintaining population diversity is crucial for preventing particles from getting stuck in local optima. In this paper, we present an improved PSO algorithm with an interswarm interactive learning strategy (IILPSO), which overcomes the drawbacks of the canonical PSO algorithm's learning strategy. IILPSO is inspired by the phenomenon in human society that interactive learning takes place among different groups. Particles in IILPSO are divided into two swarms. The interswarm interactive learning (IIL) behavior is triggered when the best particle's fitness value in both swarms does not improve for a certain number of iterations. According to the best particle's fitness value in each swarm, the softmax method and the roulette method are used to determine the roles of the two swarms as the learning swarm and the learned swarm. In addition, a velocity mutation operator and a global best vibration strategy are used to improve the algorithm's global search capability. The IIL strategy is applied to PSO with global star and local ring structures, termed the IILPSO-G and IILPSO-L algorithms, respectively. Numerical experiments are conducted to compare the proposed algorithms with eight popular PSO variants. From the experimental results, IILPSO demonstrates good performance in terms of solution accuracy, convergence speed, and reliability. Finally, the variations of the population diversity over the entire search process provide an explanation of why IILPSO performs effectively.
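For reference, the canonical global-best PSO loop that IILPSO builds on can be sketched in a few lines; the inertia and acceleration coefficients below are common textbook values, not the paper's settings.

```python
# Canonical global-best PSO, for reference; IILPSO layers its
# interswarm interactive learning on top of this basic scheme.
import numpy as np

def pso(objective, dim, n=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, objective(gbest)

best, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=10)
```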
NASA Technical Reports Server (NTRS)
Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.
1978-01-01
The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. A high speed of computation and extended grid domain were characteristics of FLO-22-V1. The new program was not the optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it proved that vector operations can readily be implemented to increase the computation rate of the algorithm.
NASA Technical Reports Server (NTRS)
Woodard, Mark; Rohrbaugh, Dave
1995-01-01
The Advanced Composition Explorer (ACE) spacecraft is designed to fly in a spin-stabilized attitude. The spacecraft will carry two attitude sensors - a digital fine Sun sensor and a charge coupled device (CCD) star tracker - to allow ground-based determination of the spacecraft attitude and spin rate. Part of the processing that must be performed on the CCD star tracker data is the star identification. Star data received from the spacecraft must be matched with star information in the SKYMAP catalog to determine exactly which stars the sensor is tracking. This information, along with the Sun vector measured by the Sun sensor, is used to determine the spacecraft attitude. Several existing star identification (star ID) systems were examined to determine whether they could be modified for use on the ACE mission. Star ID systems which exist for three-axis stabilized spacecraft tend to be complex in nature and many require fairly good knowledge of the spacecraft attitude, making their use for ACE excessive. Star ID systems used for spinners carrying traditional slit star sensors would have to be modified to model the CCD star tracker. The ACE star ID algorithm must also be robust, in that it will be able to correctly identify stars even though the attitude is not known to a high degree of accuracy, and must be very efficient to allow real-time star identification. The paper presents the star ID algorithm that was developed for ACE. Results from prototype testing are also presented to demonstrate the efficiency, accuracy, and robustness of the algorithm.
Automated real-time search and analysis algorithms for a non-contact 3D profiling system
NASA Astrophysics Data System (ADS)
Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.
2013-04-01
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members; the geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensing system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution surface-profile data while keeping algorithm running times within practical bounds for industrial application. Through a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant cost-saving opportunities in both equipment protection and waste minimization.
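The downhill-simplex-plus-template idea can be illustrated with a toy one-dimensional profile: a parametrized template is fitted to a synthetic feature by minimizing a squared-difference cost with scipy's Nelder-Mead (downhill simplex) implementation. The template and cost below are illustrative assumptions, not the production algorithm.

```python
# Toy downhill-simplex template fit: recover the location and width of
# a synthetic surface feature by minimizing a squared-difference cost.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 10.0, 1000)
profile = np.exp(-0.5 * ((x - 4.2) / 0.3) ** 2)   # synthetic indent at 4.2

def template(params):
    center, width = params
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def cost(params):
    return float(np.sum((template(params) - profile) ** 2))

fit = minimize(cost, x0=[5.0, 0.5], method="Nelder-Mead")
center, width = fit.x   # recovered feature location and size
```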
Evaluation of the image quality of telescopes using the star test
NASA Astrophysics Data System (ADS)
Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.
2004-10-01
The Point Spread Function (PSF) or star test is one of the main criteria to be considered in the quality of the image formed by a telescope. In a real system the distribution of irradiance in the image of a point source is given by the PSF, a function which is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if we already know the aberrations present in the optical system, then we may use diffraction theory to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using Genetic Algorithms to perform an optimization process starting from the PSF instead of the more traditional method of fitting an aberration polynomial. We show that this method of phase recovery is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.
SHOCKFIND - an algorithm to identify magnetohydrodynamic shock waves in turbulent clouds
NASA Astrophysics Data System (ADS)
Lehmann, Andrew; Federrath, Christoph; Wardle, Mark
2016-11-01
The formation of stars occurs in the dense molecular cloud phase of the interstellar medium. Observations and numerical simulations of molecular clouds have shown that supersonic magnetized turbulence plays a key role for the formation of stars. Simulations have also shown that a large fraction of the turbulent energy dissipates in shock waves. The three families of MHD shocks - fast, intermediate and slow - distinctly compress and heat up the molecular gas, and so provide an important probe of the physical conditions within a turbulent cloud. Here, we introduce the publicly available algorithm, SHOCKFIND, to extract and characterize the mixture of shock families in MHD turbulence. The algorithm is applied to a three-dimensional simulation of a magnetized turbulent molecular cloud, and we find that both fast and slow MHD shocks are present in the simulation. We give the first prediction of the mixture of turbulence-driven MHD shock families in this molecular cloud, and present their distinct distributions of sonic and Alfvénic Mach numbers. Using subgrid one-dimensional models of MHD shocks we estimate that ~0.03 per cent of the volume of a typical molecular cloud in the Milky Way will be shock heated above 50 K, at any time during the lifetime of the cloud. We discuss the impact of this shock heating on the dynamical evolution of molecular clouds.
Robust polygon recognition method with similarity invariants applied to star identification
NASA Astrophysics Data System (ADS)
Hernández, E. Antonio; Alonso, Miguel A.; Chávez, Edgar; Covarrubias, David H.; Conte, Roberto
2017-02-01
In the star identification process the goal is to recognize a star by using the celestial bodies in its vicinity as context. An additional requirement is to avoid having to perform an exhaustive scan of the star database. In this paper we present a novel approach to star identification using similarity invariants. More specifically, the proposed algorithm defines a polygon for each star, using the neighboring celestial bodies in the field of view as vertices. The mapping is insensitive to similarity transformations; that is, the image of the polygon under the transformation is not affected by rotation, scaling or translation. Each polygon is associated with an essentially unique complex number. We perform an exhaustive experimental validation of the proposed algorithm using synthetic data generated from the star catalog, with uniformly distributed positional noise introduced to each star. The star identification method that we present is proven to be robust, achieving a recognition rate of 99.68% when noise levels of up to ±424 μrad are introduced to the location of the stars. In our tests the proposed algorithm proves that if a polygon match is found, it always corresponds to the star under analysis; no mismatches are found. In its present form our method cannot identify polygons in cases where there are missing or false stars in the analyzed images; in those situations it only indicates that no match was found.
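The paper's exact invariant construction is not reproduced here, but one standard way to build a similarity-invariant descriptor from a polygon of star positions follows the same idea: treat the vertices as complex numbers and remove translation, scale, and rotation in turn.

```python
# One standard similarity-invariant polygon descriptor (not necessarily
# the authors' exact construction): subtract the centroid (translation),
# normalize (scale), and keep magnitudes of the discrete Fourier
# coefficients (rotation and starting-vertex shift).
import numpy as np

def polygon_descriptor(vertices_xy):
    z = vertices_xy[:, 0] + 1j * vertices_xy[:, 1]
    z = z - z.mean()               # translation invariance
    z = z / np.linalg.norm(z)      # scale invariance
    return np.abs(np.fft.fft(z))   # rotation/shift invariance

pts = np.array([[0.0, 0.0], [1.0, 0.1], [0.9, 1.0], [-0.2, 0.8]])
rot = np.array([[0.8, -0.6], [0.6, 0.8]])          # a rigid rotation
same = polygon_descriptor(pts)
also_same = polygon_descriptor(2.5 * pts @ rot.T)  # scaled + rotated copy
assert np.allclose(same, also_same)
```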
Slingshot dynamics for self-replicating probes and the effect on exploration timescales
NASA Astrophysics Data System (ADS)
Nicholson, Arwen; Forgan, Duncan
2013-10-01
Interstellar probes can carry out slingshot manoeuvres around the stars they visit, gaining a boost in velocity by extracting energy from the star's motion around the Galactic Centre. These manoeuvres carry little to no extra energy cost, and in previous work it has been shown that a single Voyager-like probe exploring the Galaxy does so 100 times faster when carrying out these slingshots than when navigating purely by powered flight (Forgan et al. 2012). We expand on these results by repeating the experiment with self-replicating probes. The probes explore a box of stars representative of the local Solar neighbourhood, to investigate how self-replication affects exploration timescales when compared with a single non-replicating probe. We explore three different scenarios of probe behaviour: (i) standard powered flight to the nearest unvisited star (no slingshot techniques used), (ii) flight to the nearest unvisited star using slingshot techniques and (iii) flight to the next unvisited star that will give the maximum velocity boost under a slingshot trajectory. In all three scenarios, we find that as expected, using self-replicating probes greatly reduces the exploration time, by up to three orders of magnitude for scenarios (i) and (iii) and two orders of magnitude for (ii). The second case (i.e. nearest-star slingshots) remains the most time effective way to explore a population of stars. As the decision-making algorithms for the fleet are simple, unanticipated `race conditions' among probes are set up, causing the exploration time of the final stars to become much longer than necessary. From the scaling of the probes' performance with star number, we conclude that a fleet of self-replicating probes can indeed explore the Galaxy in a sufficiently short time to warrant the existence of the Fermi Paradox.
Trackside acoustic diagnosis of axle box bearing based on kurtosis-optimization wavelet denoising
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2018-04-01
As one of the key components of railway vehicles, the axle box bearing has an operating condition with a significant effect on traffic safety. Acoustic diagnosis is more suitable than vibration diagnosis for trackside monitoring. The acoustic signal generated by the train axle box bearing is an amplitude-modulated and frequency-modulated signal mixed with complex train running noise. Although empirical mode decomposition (EMD) and some improved time-frequency algorithms have proved useful in bearing vibration signal processing, it is hard to extract the bearing fault signal from severe trackside acoustic background noise using those algorithms. Therefore, a kurtosis-optimization-based wavelet packet (KWP) denoising algorithm is proposed, as kurtosis is the key time-domain indicator of a bearing fault signal. Firstly, geometry-based Doppler correction is applied to the signal of each sensor, and by superposing the signals of multiple sensors, random noise and impulsive noise, which interfere with the kurtosis indicator, are suppressed. Then, the KWP is conducted. At last, EMD and the Hilbert transform are applied to extract the fault feature. Experiment results indicate that the proposed method, consisting of KWP and EMD, is superior to EMD alone.
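A minimal sketch of the kurtosis-guided wavelet-packet selection at the heart of such a KWP step is shown below, using PyWavelets; the wavelet, decomposition depth, and test signal are illustrative choices rather than the paper's parameters.

```python
# Kurtosis-guided wavelet-packet denoising sketch: decompose, keep the
# sub-band whose coefficients have maximum kurtosis (where impulsive
# fault content concentrates), and reconstruct from it alone.
import numpy as np
import pywt
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)
signal = np.sin(2 * np.pi * 80 * t) + 0.5 * rng.standard_normal(t.size)
signal[::512] += 4.0                      # impulsive "fault" content

wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=4)
nodes = wp.get_level(4, order="freq")
best = max(nodes, key=lambda n: kurtosis(n.data))

out = pywt.WaveletPacket(data=None, wavelet="db4", maxlevel=4)
out[best.path] = best.data                # keep only the best sub-band
denoised = out.reconstruct(update=False)
```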
Improving CMD Areal Density Analysis: Algorithms and Strategies
NASA Astrophysics Data System (ADS)
Wilson, R. E.
2014-06-01
Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A is reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
NASA Astrophysics Data System (ADS)
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model produces an initial classification result, which is then optimized with the MRF technique using the spatial correlation between pixels. Additionally, morphological methods are employed to extract the airport region of interest (ROI), in which suspected aircraft target samples are screened to reduce false alarms and increase detection performance. Finally, the aircraft target detection results are verified by simulation tests.
Estimation Filter for Alignment of the Spitzer Space Telescope
NASA Technical Reports Server (NTRS)
Bayard, David
2007-01-01
A document presents a summary of an onboard estimation algorithm now being used to calibrate the alignment of the Spitzer Space Telescope (formerly known as the Space Infrared Telescope Facility). The algorithm, denoted the S2P calibration filter, recursively generates estimates of the alignment angles between a telescope reference frame and a star-tracker reference frame. At several discrete times during the day, the filter accepts, as input, attitude estimates from the star tracker and observations taken by the Pointing Control Reference Sensor (a sensor in the field of view of the telescope). The output of the filter is a calibrated quaternion that represents the best current mean-square estimate of the alignment angles between the telescope and the star tracker. The S2P calibration filter incorporates a Kalman filter that tracks six states - two for each of three orthogonal coordinate axes. Although, in principle, one state per axis is sufficient, the use of two states per axis makes it possible to model both short- and long-term behaviors. Specifically, the filter properly models transient learning, characteristic times and bounds of thermomechanical drift, and long-term steady-state statistics, whether calibration measurements are taken frequently or infrequently. These properties ensure that the S2P filter performance is optimal over a broad range of flight conditions, and can be confidently run autonomously over several years of in-flight operation without human intervention.
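A minimal single-axis sketch of the two-states-per-axis idea (one state for the alignment angle, one for its drift rate) is given below; all matrices and noise levels are illustrative assumptions, not Spitzer's calibrated values.

```python
# Minimal single-axis Kalman filter sketch of the two-state idea: one
# state for the alignment angle and one for its thermomechanical drift
# rate. Matrices and noise levels are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # angle integrates drift rate
H = np.array([[1.0, 0.0]])              # only the angle is measured
Q = np.diag([1e-8, 1e-10])              # process noise (drift model)
R = np.array([[1e-6]])                  # measurement noise

x = np.zeros(2)                         # [angle, drift rate]
P = np.eye(2) * 1e-4

def s2p_like_update(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with a new alignment measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = s2p_like_update(x, P, z=np.array([2.3e-4]))
```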
NASA Astrophysics Data System (ADS)
Roy, Soumen; Sengupta, Anand S.; Thakor, Nilay
2017-05-01
Astrophysical compact binary systems consisting of neutron stars and black holes are an important class of gravitational wave (GW) sources for advanced LIGO detectors. Accurate theoretical waveform models from the inspiral, merger, and ringdown phases of such systems are used to filter detector data under the template-based matched-filtering paradigm. An efficient grid over the parameter space at a fixed minimal match has a direct impact on the overall time taken by these searches. We present a new hybrid geometric-random template placement algorithm for signals described by parameters of two masses and one spin magnitude. Such template banks could potentially be used in GW searches from binary neutron stars and neutron star-black hole systems. The template placement is robust and is able to automatically accommodate curvature and boundary effects with no fine-tuning. We also compare these banks against vanilla stochastic template banks and show that while both are equally efficient in the fitting-factor sense, the bank sizes are ~25% larger in the stochastic method. Further, we show that the generation of the proposed hybrid banks can be sped up by nearly an order of magnitude over the stochastic bank. Generic issues related to optimal implementation are discussed in detail. These improvements are expected to directly reduce the computational cost of gravitational wave searches.
The anti-proliferative and anti-angiogenic effect of the methanol extract from brittle star.
Baharara, Javad; Amini, Elaheh; Mousavi, Marzieh
2015-04-01
Anti-angiogenic therapy is a crucial step in cancer treatment. The discovery of new anti-angiogenic compounds from marine organisms has become an attractive concept in anti-cancer therapy. Because few data exist on the pro- and anti-angiogenic efficacies of the Ophiuroidea, which include the brittle stars, the current study was designed to explore the anti-angiogenic potential of brittle star methanol extract in vitro and in vivo. The anti-proliferative effect of brittle star extract on A2780cp cells was examined by MTT assays, and transcriptional expression of VEGF and b-FGF was evaluated by RT-PCR. In an in vivo model, 40 fertilized Ross eggs were divided into control and three experimental groups. The experimental groups were incubated with brittle star extract at concentrations of 25, 50 and 100 µg/ml, and photographed by photo-stereomicroscopy. Ultimately, the numbers and lengths of vessels were measured with Image J software, and data were analyzed with SPSS software (p<0.05). Results illustrated that the brittle star extract exerted a dose- and time-dependent anti-proliferative effect on A2780cp cancer cells. In addition, VEGF and b-FGF expression decreased with brittle star methanol extract treatment. Macroscopic evaluations revealed significant changes in the second and third experimental groups compared to controls (p<0.05). These findings reveal the anti-angiogenic effects of brittle star methanol extract in vitro and in vivo and confer novel insight into the application of natural marine products in angiogenesis-related pathologies.
NASA Astrophysics Data System (ADS)
Rahmawati, Sitti; Agnesstacia
2014-03-01
This research analyzes the factors that affect the performance of batteries made from star fruit extract and cactus extract. The voltage and current generated measure the performance of the battery. Voltage was measured as a function of electrode distance and of electrode surface area, which together determine the current density and the voltage generated. The experimental results show that the battery voltage is fairly large, about 1.8 V for the star fruit extract and 1.7 V for the cactus extract, which means that star fruit juice extract and cactus juice extract can serve as an alternative battery replacement. Measurements with different electrode surface areas on the star fruit and cactus extracts, with electrode depths from 0.5 cm to 4 cm, showed a decrease in the generated electric current from 12.5 mA to 1.0 mA, while the voltage remained the same.
False star detection and isolation during star tracking based on improved chi-square tests.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Yang, Yanqiang; Su, Guohua
2017-08-01
The star sensor is a precise attitude measurement device for spacecraft, and star tracking is its main and key working mode. However, during star tracking, false stars are an inevitable interference for star sensor applications and may degrade measurement accuracy. A false star detection and isolation algorithm for star tracking based on improved chi-square tests is proposed in this paper. Two state estimates are established, based on a Kalman filter and on a priori information, respectively. False star detection is performed by applying a global state chi-square test in the Kalman filter, while false star isolation is achieved using a local state chi-square test. Semi-physical experiments under different trajectories with various false stars were designed for verification. Experiment results show that various false stars can be detected and isolated from navigation stars during star tracking, and that the attitude measurement accuracy is hardly influenced by false stars. The proposed algorithm is shown to have excellent performance in terms of speed, stability, and robustness.
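The detection step rests on the standard chi-square consistency test for Kalman filter innovations, which can be sketched as follows; dimensions and the significance level are illustrative assumptions.

```python
# Chi-square innovation gating sketch: the normalized innovation
# squared of a Kalman filter is chi-square distributed when only true
# stars are tracked, so an oversized value flags a false star.
import numpy as np
from scipy.stats import chi2

def innovation_gate(z, z_pred, S, alpha=0.01):
    r = z - z_pred                            # innovation
    d2 = float(r @ np.linalg.solve(S, r))     # normalized innovation squared
    return d2 > chi2.ppf(1.0 - alpha, df=z.size)   # True -> flag as false star

S = np.diag([0.04, 0.04])                     # innovation covariance (pixels^2)
print(innovation_gate(np.array([10.9, 5.1]), np.array([10.0, 5.0]), S))
```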
Method for Location of An External Dump in Surface Mining Using the A-Star Algorithm
NASA Astrophysics Data System (ADS)
Zajączkowski, Maciej; Kasztelewicz, Zbigniew; Sikora, Mateusz
2014-10-01
The construction of a surface mine always involves the necessity of accessing deposits through the removal of the residual overburden above. In the beginning phase of exploitation, the masses of overburden are located outside the perimeter of the excavation site, on the external dump, until internal dumping becomes possible. In the case of lignite surface mines, these dumps can cover a ground surface of several dozen to a few thousand hectares. This results from the high concentration of lignite extraction, counted in millions of Mg per year, and the relatively large depth of the deposits. Determining the best place for the location of an external dump requires a detailed analysis of the existing options, followed by a choice of the most favorable one. This article, using the case study of an open-cast lignite mine, presents a selection method for an external dump location based on graph theory and the A-star algorithm. This algorithm, based on the spatial distribution of individual intersections on the graph, searches specified graph states, continually expanding them with additional elementary fields until the required surface area for the external dump - defined by the lowest cost of the occupied site - is achieved. To do this, it is necessary to accurately identify the factors affecting the choice of dump location. On such a basis, it is then possible to specify the target function, which reflects the individual costs of dump construction on a given site; this is discussed further in chapter 3. The area of potential dump location has been divided into elementary fields, each represented by a corresponding geometrical locus. Ascribed to this locus, in addition to its geodesic coordinates, are the appropriate attributes reflecting the degree of development of its elementary field. These tasks can be carried out automatically thanks to the integration of the method with the system of geospatial data management for the given area. The collection of loci, together with their geodesic coordinates, constitutes the points on the graph used during exploration. The search is performed with the A-star algorithm, which uses a heuristic function to identify the optimal solution: the collection of elementary fields occupying the potential construction area of the dump that is characterized by the lowest total cost of land occupation and overburden dumping. The precision of the boundary generated by the algorithm depends on the chosen size of the elementary field and should be refined each time by the designer of the surface mine. This article presents the application of the above method of dump location using the example of "Tomisławice," a lignite surface mine owned by PAK KWB Konin S.A. The method made it possible to identify the most favorable dump location on the northeast side of the initial pit, within 2 kilometers of its surrounding area (discussed further in chapter 3). This method is universal in nature and, after certain modifications, can be implemented for other surface mines as well.
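For illustration, a minimal A-star search over a grid of elementary fields with per-cell occupation costs is sketched below. Note the simplification: the sketch finds a least-cost path between cells, whereas the method described above grows a least-cost region of the required area, but the heuristic-guided expansion logic is analogous.

```python
# Minimal A* over a grid of elementary fields; per-cell costs stand in
# for the land-use attributes described above, and the heuristic is the
# straight-line distance to the nearest goal cell (admissible here
# because every cell costs at least 1).
import heapq
import numpy as np

def a_star(cost, start, goals):
    goals = set(goals)
    h = lambda c: min(np.hypot(c[0] - g[0], c[1] - g[1]) for g in goals)
    open_set = [(h(start), 0.0, start, [start])]
    seen = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell in goals:
            return path, g
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                ng = g + cost[nr, nc]
                heapq.heappush(open_set, (ng + h((nr, nc)), ng,
                                          (nr, nc), path + [(nr, nc)]))
    return None, np.inf

cost = np.ones((20, 20))
cost[5:15, 8] = 50.0                          # an expensive strip of land
path, total = a_star(cost, start=(0, 0), goals=[(19, 19)])
```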
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
Efficient transmission of power in machines is essential, and gears are an appropriate choice. Faults in gears result in losses of energy and money. Monitoring and fault diagnosis are done by analyzing the acoustic and vibration signals, which are generally considered unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox based on the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study of different wavelet transformations for feature extraction from the acoustic signals was done, followed by wavelet selection and feature selection using the J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
Automatic tissue characterization from ultrasound imagery
NASA Astrophysics Data System (ADS)
Kadah, Yasser M.; Farag, Aly A.; Youssef, Abou-Bakr M.; Badawi, Ahmed M.
1993-08-01
In this work, feature extraction algorithms are proposed to extract tissue characterization parameters from liver images. The resulting parameter set is then further processed to obtain the minimum number of parameters representing the most discriminating pattern space for classification. This preprocessing step was applied to over 120 pathology-investigated cases to obtain the learning data for designing the classifier. The extracted features are divided into independent training and test sets and are used to construct both statistical and neural classifiers. The design criteria for these classifiers are minimum error, ease of implementation and learning, and flexibility for future modifications. Various algorithms for implementing various classification techniques are presented and tested on the data. The best performance was obtained using a single-layer tensor-model functional-link network. The voting k-nearest-neighbor classifier also provided comparably good diagnostic rates.
Chemodynamical Clustering Applied to APOGEE Data: Rediscovering Globular Clusters
NASA Astrophysics Data System (ADS)
Chen, Boquan; D’Onghia, Elena; Pardy, Stephen A.; Pasquali, Anna; Bertelli Motta, Clio; Hanlon, Bret; Grebel, Eva K.
2018-06-01
We have developed a novel technique based on a clustering algorithm that searches for kinematically and chemically clustered stars in the APOGEE DR12 Cannon data. As compared to classical chemical tagging, the kinematic information included in our methodology allows us to identify stars that are members of known globular clusters with greater confidence. We apply our algorithm to the entire APOGEE catalog of 150,615 stars whose chemical abundances are derived by the Cannon. Our methodology found anticorrelations between the elements Al and Mg, Na and O, and C and N previously identified in the optical spectra in globular clusters, even though we omit these elements in our algorithm. Our algorithm identifies globular clusters without a priori knowledge of their locations in the sky. Thus, not only does this technique promise to discover new globular clusters, but it also allows us to identify candidate streams of kinematically and chemically clustered stars in the Milky Way.
Improved imaging algorithm for bridge crack detection
NASA Astrophysics Data System (ADS)
Lu, Jingxiao; Song, Pingli; Han, Kaihong
2012-04-01
This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the crack geometry characteristics, we extract the skeleton of a single crack to measure its length. To calculate the crack area, we construct an area template by applying a logical bitwise AND operation to the crack image. Experiments show that the errors between this crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and performance of bridge maintenance and rehabilitation processes.
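A minimal sketch of the eight-direction Sobel stage (before the paper's optimizations) is shown below: the standard kernel and its 45-degree variant are rotated to all eight directions and the maximum absolute response is kept per pixel. The test image is a stand-in.

```python
# Eight-direction Sobel sketch: convolve with the 0-degree and
# 45-degree Sobel kernels rotated through all eight orientations and
# keep the maximum absolute response per pixel. Thresholding and the
# paper's further optimizations are omitted.
import numpy as np
from scipy import ndimage

k0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # 0 degrees
k45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]])  # 45 degrees
kernels = [np.rot90(k, i) for k in (k0, k45) for i in range(4)]

def eight_direction_sobel(image):
    responses = [ndimage.convolve(image.astype(float), k) for k in kernels]
    return np.max(np.abs(responses), axis=0)

edges = eight_direction_sobel(np.random.default_rng(0).random((64, 64)))
```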
Group sparse multiview patch alignment framework with view consistency for image classification.
Gui, Jie; Tao, Dacheng; Sun, Zhenan; Luo, Yong; You, Xinge; Tang, Yuan Yan
2014-07-01
No single feature can satisfactorily characterize the semantic concepts of an image. Multiview learning aims to unify different kinds of features to produce a consensual and efficient representation. This paper redefines part optimization in the patch alignment framework (PAF) and develops a group sparse multiview patch alignment framework (GSM-PAF). The new part optimization considers not only the complementary properties of different views, but also view consistency. In particular, view consistency models the correlations between all possible combinations of any two kinds of view. In contrast to conventional dimensionality reduction algorithms that perform feature extraction and feature selection independently, GSM-PAF enjoys joint feature extraction and feature selection by exploiting the ℓ2,1-norm on the projection matrix to achieve row sparsity, which leads to the simultaneous selection of relevant features and learning of the transformation, and thus makes the algorithm more discriminative. Experiments on two real-world image data sets demonstrate the effectiveness of GSM-PAF for image classification.
Cai, Tianxi; Karlson, Elizabeth W.
2013-01-01
Objectives To test whether data extracted from full text patient visit notes from an electronic medical record (EMR) would improve the classification of PsA compared to an algorithm based on codified data. Methods From the > 1,350,000 adults in a large academic EMR, all 2318 patients with a billing code for PsA were extracted and 550 were randomly selected for chart review and algorithm training. Using codified data and phrases extracted from narrative data using natural language processing, 31 predictors were extracted and three random forest algorithms trained using coded, narrative, and combined predictors. The receiver operator curve (ROC) was used to identify the optimal algorithm and a cut point was chosen to achieve the maximum sensitivity possible at a 90% positive predictive value (PPV). The algorithm was then used to classify the remaining 1768 charts and finally validated in a random sample of 300 cases predicted to have PsA. Results The PPV of a single PsA code was 57% (95%CI 55%–58%). Using a combination of coded data and NLP the random forest algorithm reached a PPV of 90% (95%CI 86%–93%) at sensitivity of 87% (95% CI 83% – 91%) in the training data. The PPV was 93% (95%CI 89%–96%) in the validation set. Adding NLP predictors to codified data increased the area under the ROC (p < 0.001). Conclusions Using NLP with text notes from electronic medical records improved the performance of the prediction algorithm significantly. Random forests were a useful tool to accurately classify psoriatic arthritis cases to enable epidemiological research. PMID:20701955
ECG based Myocardial Infarction detection using Hybrid Firefly Algorithm.
Kora, Padmavathi
2017-12-01
Myocardial Infarction (MI) is one of the most frequent diseases and can cause death, disability, and monetary loss in patients who suffer from cardiovascular disorders. Physicians' diagnostic methods for this ailment are typically invasive, even though they do not achieve the required detection accuracy. Recent feature extraction methods, for example Auto Regressive (AR) modelling, Magnitude Squared Coherence (MSC), and Wavelet Coherence (WTC) using the Physionet database, yield huge feature sets. A large number of these features may be inconsequential, containing redundant and non-discriminative components that impose an excess computational burden and degrade performance. Therefore, Hybrid Firefly and Particle Swarm Optimization (FFPSO) is used directly to optimize the raw ECG signal instead of extracting features with the above techniques. The results in this paper show that, for detection of the MI class, the FFPSO algorithm with an ANN gives 99.3% accuracy, a sensitivity of 99.97%, and a specificity of 98.7% on the MIT-BIH database, also including the NSR database. The proposed approach shows that methods based on feature optimization of ECG signals are well suited to diagnosing the condition of heart patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhang, Chengjiang; Zhang, Zhuomin; Li, Gongke
2014-06-13
In this study, a novel sulfonated graphene/polypyrrole (SG/PPy) solid-phase microextraction (SPME) coating was prepared and fabricated on a stainless-steel wire by a one-step in situ electrochemical polymerization method. Crucial preparation conditions were optimized: a polymerization time of 15 min and an SG doping amount of 1.5 mg/mL. The SG/PPy coating showed excellent thermal stability and mechanical durability, with a long lifespan of more than 200 stable replicate extractions. The SG/PPy coating demonstrated higher extraction selectivity and capacity for volatile terpenes than commonly used commercial coatings. Finally, the SG/PPy coating was applied in practice to the analysis of volatile components from star anise and fennel samples. The majority of the volatile components identified were terpenes, which confirms the very high extraction selectivity of the SG/PPy coating for terpenes in real analytical projects. Four typical volatile terpenes were further quantified at 0.2-27.4 μg/g from star anise samples with good recoveries of 76.4-97.8% and at 0.1-1.6 μg/g from fennel samples with good recoveries of 80.0-93.1%, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
Development of Solvent Extraction Approach to Recycle Enriched Molybdenum Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tkac, Peter; Brown, M. Alex; Sen, Sujat
2016-06-01
Argonne National Laboratory, in cooperation with Oak Ridge National Laboratory and NorthStar Medical Technologies, LLC, is developing a recycling process for a solution containing valuable Mo-100 or Mo-98 enriched material. Previously, Argonne had developed a recycle process using a precipitation technique. However, this process is labor intensive and can lead to production of large volumes of highly corrosive waste. This report discusses an alternative process to recover enriched Mo in the form of ammonium heptamolybdate by using solvent extraction. Small-scale experiments determined the optimal conditions for effective extraction of high Mo concentrations. Methods were developed for removal of ammonium chloride from the molybdenum product of the solvent extraction process. In large-scale experiments, very good purification from potassium and other elements was observed, with very high recovery yields (~98%).
NASA Astrophysics Data System (ADS)
Duarte, Manuel; Mamon, Gary A.
2014-05-01
The specific star formation rates of galaxies are influenced both by their mass and by their environment. Moreover, the mass function of groups and clusters serves as a powerful cosmological tool. It is thus important to quantify the accuracy to which group properties are extracted from redshift surveys. We test here the Friends-of-Friends (FoF) grouping algorithm, which depends on two linking lengths (LLs), plane-of-sky and line-of-sight (LOS), normalized to the mean nearest neighbour separation of field galaxies. We argue, on theoretical grounds, that LLs should be b⊥ ≃ 0.11, and b∥ ≈ 1.3 to recover 95 per cent of all galaxies with projected radii within the virial radius r200 and 95 per cent of the galaxies along the LOS. We then predict that 80 to 90 per cent of the galaxies in FoF groups should lie within their parent real-space groups (RSGs), defined within their virial spheres. We test the FoF extraction for 16 × 16 pairs of LLs, using subsamples of galaxies, doubly complete in distance and luminosity, of a flux-limited mock Sloan Digital Sky Survey (SDSS) galaxy catalogue. We find that massive RSGs are more prone to fragmentation, while the fragments typically have low estimated mass, with typically 30 per cent of groups of low and intermediate estimated mass being fragments. Group merging rises drastically with estimated mass. For groups of three or more galaxies, galaxy completeness and reliability are both typically better than 80 per cent (after discarding the fragments). Estimated masses of extracted groups are biased low, by up to a factor 4 at low richness, while the inefficiency of mass estimation improves from 0.85 dex to 0.2 dex when moving from low to high multiplicity groups. The optimal LLs depend on the scientific goal for the group catalogue. We propose b⊥ ≃ 0.07, with b∥ ≃ 1.1 for studies of environmental effects, b∥ ≃ 2.5 for cosmographic studies and b∥ ≃ 5 for followups of individual groups.
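For concreteness, a minimal isotropic FoF implementation is sketched below; it uses a single linking length, whereas the analysis above uses separate plane-of-sky and line-of-sight linking lengths in redshift space.

```python
# Minimal isotropic friends-of-friends sketch: link galaxies closer
# than linking length b and take connected components as groups, via a
# KD-tree for pair finding and union-find for component labels.
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(positions, b):
    parent = list(range(len(positions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in cKDTree(positions).query_pairs(r=b):
        parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(len(positions))])
    return labels  # galaxies sharing a label share a group

pts = np.random.default_rng(1).random((500, 3))
labels = fof_groups(pts, b=0.05)
```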
A novel approach for dimension reduction of microarray.
Aziz, Rabia; Verma, C K; Srivastava, Namita
2017-12-01
This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes based on a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, and a wrapper approach, to optimize the reduced feature vectors. This hybrid search technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results obtained from the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To check how ICA+ABC performs for feature selection with the NB classifier, the combination of ICA with popular filter techniques and with other similar bio-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), was also compared. The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared to other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks
Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping
2013-01-01
In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
A Comprehensive Stellar Astrophysical Study of the Old Open Cluster M67 with Kepler
NASA Astrophysics Data System (ADS)
Mathieu, Robert D.; Vanderburg, Andrew; K2 M67 Team
2016-06-01
M67 is among the best studied of all star clusters. Being at an age and metallicity very near solar, at an accessible distance of 850 pc with low reddening, and rich in content (over 1000 members including main-sequence dwarfs, a well populated subgiant branch and red giant branch, white dwarfs, blue stragglers, sub-subgiants, X-ray sources and CVs), M67 is a cornerstone of stellar astrophysics.The K2 mission (Campaign 5) has obtained long-cadence observations for 2373 stars, both within an optimized central superaperture and as specified targets outside the superaperture. 1,432 of these stars are likely cluster members based on kinematic and photometric criteria.We have extracted light curves and corrected for K2 roll systematics, producing light curves with noise characteristics qualitatively similar to Kepler light curves of stars of similar magnitudes. The data quality is slightly poorer than for field stars observed by K2 due to crowding near the cluster core, but the data are of sufficient quality to detect seismic oscillations, binary star eclipses, flares, and candidate transit events. We are in the process of uploading light curves and various diagnostic files to MAST; light curves and supporting data will also be made available on ExoFOP.Importantly, several investigators within the M67 K2 team are independently doing light curve extractions and analyses for confirmation of science results. We also are adding extensive ground-based supporting data, including APOGEE near-infrared spectra, TRES and WIYN optical spectra, LCOGT photometry, and more.Our science goals encompass asteroseismology and stellar evolution, alternative stellar evolution pathways in binary stars, stellar rotation and angular momentum evolution, stellar activity, eclipsing binaries and beaming, and exoplanets. We will present early science results as available by the time of the meeting, and certainly including asteroseismology, blue stragglers and sub-subgiants, and newly discovered eclipsing binaries.This work is supported by NASA grant NNX15AW24A to the University of Wisconsin - Madison.
Optimization of COS/FUV Spectrum Placement at Lifetime Position 4
NASA Astrophysics Data System (ADS)
De Rosa, Gisella
2017-08-01
We give a summary of the rationale, structure and preliminary analysis of the Lifetime Position 4 (LP4) special calibration program 14841, aimed at determining the optimal placement of the spectra at LP4. The program obtained deep (S/N = 60 per resel) exposures of the standard star WD0308-565 with G130M/1291 and G130M/1222 settings at -2.52" below LP3 in the cross dispersion direction. These particular settings were chosen because they have the widest footprints on the detectors. Science spectra were successfully extracted at this position without any contamination due to gain-sag at LP3.
CCD centroiding experiment for JASMINE and ILOM
NASA Astrophysics Data System (ADS)
Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Nakajima, Tadashi; Kawano, Nobuyuki; Tazawa, Seiichi; Yamada, Yoshiyuki; Hanada, Hideo; Asari, Kazuyoshi; Tsuruta, Seiitsu
2006-06-01
JASMINE and ILOM are space missions in progress at the National Astronomical Observatory of Japan. These two projects need a common astrometric technique for obtaining precise positions of star images on solid-state detectors to accomplish their objectives. We have carried out measurements of the centroids of artificial star images on a CCD to investigate the accuracy of the star positions, using an algorithm that estimates them from photon-weighted means. We find that the accuracy of the star positions reaches 1/300 pixel for a single measurement. We also measure star positions using an algorithm that corrects for optical image distortion. Finally, we find that the accuracy of the positions of stars measured from a strongly distorted image is below 1/150 pixel for a single measurement.
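A minimal sketch of the photon-weighted-mean centroiding step in Python; the background handling and window conventions below are illustrative assumptions, not the actual JASMINE/ILOM pipeline:

    import numpy as np

    def centroid(window, background=0.0):
        # intensity-weighted (photon-weighted) mean position of a star image window
        img = np.clip(np.asarray(window, dtype=float) - background, 0.0, None)
        total = img.sum()
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        return (img * xs).sum() / total, (img * ys).sum() / total  # (x, y) in pixels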
NASA Astrophysics Data System (ADS)
Xu, Lili; Luo, Shuqian
2010-11-01
Microaneurysms (MAs) are the first manifestation of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm comprises the following stages: candidate detection, which extracts the patterns possibly corresponding to MAs using the mathematical-morphology black top-hat; feature extraction, which characterizes these candidates; and classification based on a support vector machine (SVM), which validates the MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
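The candidate-detection stage can be illustrated with SciPy's grey-scale black top-hat; the structuring-element size and threshold below are placeholder values, and the SVM classification step is only indicated, not reproduced:

    import numpy as np
    from scipy import ndimage

    def ma_candidates(green_channel, size=11, thresh=10.0):
        # black top-hat enhances small dark blobs such as microaneurysms
        tophat = ndimage.black_tophat(green_channel.astype(float), size=size)
        labels, n = ndimage.label(tophat > thresh)
        return labels, n  # labelled candidate regions for feature extraction

    # features computed per candidate would then feed a classifier,
    # e.g. sklearn.svm.SVC(kernel="poly", degree=2)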
Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli
2015-01-01
In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic-means method. Then, an algebraic circle-fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, the proposed method improves the accuracy of tree-diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that the method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
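A hedged sketch of the two-step fit (algebraic circle initialization, then Levenberg-Marquardt refinement of the geometric residuals); the paper's polar-form details are not reproduced:

    import numpy as np
    from scipy.optimize import least_squares

    def fit_circle(x, y):
        # algebraic (Kasa) initialization: x^2 + y^2 + D*x + E*y + F = 0
        A = np.column_stack([x, y, np.ones_like(x)])
        D, E, F = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
        cx, cy = -D / 2.0, -E / 2.0
        r0 = np.sqrt(cx**2 + cy**2 - F)
        # Levenberg-Marquardt refinement of the geometric (radial) residuals
        res = lambda p: np.hypot(x - p[0], y - p[1]) - p[2]
        cx, cy, r = least_squares(res, [cx, cy, r0], method="lm").x
        return cx, cy, 2.0 * r  # trunk center and DBH estimate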
Wang, Hong-Hua
2014-01-01
A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and has multiple parameters. Since conventional methods are incapable of identifying the parameters of a PV module, an effective optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulated collective behavior of real fish swarms, is proposed to quickly and accurately extract the parameters of a PV module. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated on various PV module parameters under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of achieving higher parameter-identification precision. PMID:25243233
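The AFSA itself is not reproduced here; as a hedged stand-in, the sketch below fits a deliberately simplified single-diode model (series and shunt resistances omitted) with SciPy's differential evolution:

    import numpy as np
    from scipy.optimize import differential_evolution

    def fit_pv_params(V, I_meas, Vt=0.0259):
        # simplified single-diode model: I = Iph - I0*(exp(V/(n*Vt)) - 1)
        def sse(p):
            Iph, I0, n = p
            return np.sum((I_meas - (Iph - I0 * (np.exp(V / (n * Vt)) - 1.0))) ** 2)
        bounds = [(0.0, 10.0), (1e-12, 1e-5), (0.5, 2.5)]  # Iph [A], I0 [A], ideality n
        return differential_evolution(sse, bounds, seed=0).x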
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
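In modern terms, the QR least-squares step looks as follows (a NumPy sketch, not the SL/1 code from the appendices):

    import numpy as np

    def qr_lstsq(A, b):
        # over-determined A x ~= b: factor A = Q R, then solve the triangular system
        Q, R = np.linalg.qr(A)
        return np.linalg.solve(R, Q.T @ b)

    A = np.random.rand(100, 5)
    x_true = np.arange(5.0)
    print(np.allclose(qr_lstsq(A, A @ x_true), x_true))  # True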
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu
2013-01-01
An autonomous navigation algorithm using a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2) is presented. The star images are sampled by FOV1, and the ultraviolet earth images are sampled by FOV2. The star identification and star tracking algorithms are executed at FOV1. Then, the optical-axis direction of FOV1 in the J2000.0 coordinate system is calculated. The ultraviolet image of the earth is sampled by FOV2, and the earth center vector in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth image. The autonomous navigation data of the satellite are calculated by the integrated sensor from the optical-axis direction of FOV1 and the earth center vector from FOV2. The position accuracy of the autonomous navigation for the satellite is improved from 1000 m to 300 m, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sine errors of the autonomous navigation are eliminated. The autonomous navigation for a satellite with a sensor that integrates an ultraviolet earth sensor and a star sensor is robust. PMID:24250261
Toward faster and more accurate star sensors using recursive centroiding and star identification
NASA Astrophysics Data System (ADS)
Samaan, Malak Anees
The objective of this research is to study novel techniques for spacecraft attitude determination using star tracker sensors. This dissertation addresses various issues in developing improved star tracker software, presents new approaches for better star tracker performance, and considers applications to realize high-precision attitude estimates. Star sensors are often included in a spacecraft attitude-system instrument suite where high-accuracy pointing capability is required. Novel methods for image processing, ground calibration of camera parameters, autonomous star pattern recognition, and recursive star identification are researched and implemented to achieve a high-accuracy, high-frame-rate star tracker that can be used for many space missions. This dissertation presents the methods and algorithms implemented for the one-field-of-view (FOV) StarNav I sensor that was tested aboard the STS-107 mission in spring 2003 and the two-FOV StarNav II sensor for the EO-3 spacecraft scheduled for launch in 2007. The results of this research enable advances in spacecraft attitude determination based upon real-time star sensing and pattern recognition. Building upon recent developments in image processing, pattern recognition algorithms, focal plane detectors, electro-optics, and microprocessors, the star tracker concept utilized in this research has the following key objectives for spacecraft of the future: lower cost, lower mass and smaller volume, increased robustness to environment-induced aging and instrument response variations, and increased adaptability and autonomy via recursive self-calibration and health monitoring on orbit. Many of these attributes are consequences of improved algorithms that are derived in this dissertation.
Optimal Target Stars in the Search for Life
NASA Astrophysics Data System (ADS)
Lingam, Manasvi; Loeb, Abraham
2018-04-01
The selection of optimal targets in the search for life represents a highly important strategic issue. In this Letter, we evaluate the benefits of searching for life around a potentially habitable planet orbiting a star of arbitrary mass relative to a similar planet around a Sun-like star. If recent physical arguments implying that the habitability of planets orbiting low-mass stars is selectively suppressed are correct, we find that planets around solar-type stars may represent the optimal targets.
NASA Astrophysics Data System (ADS)
Fustes Villadóniga, Diego
2014-02-01
In the so-called IT era, the capabilities of data acquisition systems have increased to such an extent that it has become difficult to store all the information they produce, and analyse it. This explosion of data has recently appeared in the field of Astronomy, where an increasing number of objects are being observed on a regular basis. An example of this is the upcoming Gaia mission, which will pick up multiple properties of a billion stars, whose information will have a volume of approximately a petabyte. The analysis of a similar amount of information inevitably requires the development of new data analysis methods to extract all the knowledge it contains. This thesis is devoted to the development of data analysis methods to be integrated in the Gaia pipeline, such that knowledge can be extracted from the data collected by the mission. In order to analyze the data from the Gaia mission, the European Space Agency organized the Data Processing and Analysis Consortium (DPAC) which is composed of hundreds of scientists and engineers. DPAC is divided into eight Coordination Units (CUs). This thesis is dedicated to algorithm development in CU8, which is responsible for source classification and astrophysical parameters (AP) estimation. Methods based on Artificial Neural Networks (ANNs) are developed to perform the tasks related to two different work packages in CU8: the GSP-Spec package (GWP-823), and the OA package (GWP-836). The GSP-Spec package is responsible for estimating stellar APs by means of the Radial Velocity Spectrograph (RVS) spectrum. This work presents the development of one of the GSP-Spec modules, which is based on the application of feed-forward ANNs. A methodology is described, based on the optimization of genetic algorithms and aimed at obtaining an optimal set of configuration parameters for the ANN in each case, depending on the signal to noise ratio (SNR) in the RVS spectrum and on the type of star to parameterize. Furthermore, in order to improve the AP estimates, wavelet signal processing techniques, applied to the RVS spectrum, are studied. Despite the effectiveness shown by ANNs in estimating APs, in principle they lack the ability to provide an uncertainty value on these estimates, making it impossible to determine their reliability. Because of this, a new architecture for the ANN is presented in which the inputs and outputs are reversed, so that the ANN estimates the RVS spectrum from the APs. Such an architecture is called Generative ANN (GANN) and is applied to the AP estimation of a set of simulated RVS spectra for the Gaia mission, where it is more effective than the conventional ANN model, in the case of faint stars with low SNR. Finally, the GANN can be applied for obtaining the posterior probability of each of the APs according to the RVS spectrum, allowing for their more complete analysis. Given the nature of the Gaia mission, which is the first astronomical mission that will observe, in an unbiased way, the entire sky up to magnitude 20, a large number of outliers are expected. The OA package in CU8 handles the processing of this type of objects, which are defined as those that could not be reliably classified by the methods in the upstream classification packages. OA methods are based on the unsupervised learning of all outliers. Such learning has two parts: clustering and dimensionality reduction. The Self-Organizing Map (SOM) algorithm is selected as a basis for this learning. 
Its effectiveness is demonstrated when it is applied, with an optimal configuration, to the Gaia simulations. Furthermore, the algorithm is applied to real outliers from the SDSS catalog. Since a subsequent identification of the clusters obtained by the SOM is necessary, two different methods of identification are applied. The first method is based on the similarity between the SOM prototypes and the Gaia simulations, and the second method is based on the recovery of stored classifications in the SIMBAD catalog by cross-matching celestial coordinates. Thanks to the visualization of the SOM planes, and to both methods of identification, it is possible to distinguish between valid observations and observational artifacts. Furthermore, the method allows for the selection of objects of interest for follow-up observations, in order to determine their nature.
Glacier Frontal Line Extraction from SENTINEL-1 SAR Imagery in Prydz Area
NASA Astrophysics Data System (ADS)
Li, F.; Wang, Z.; Zhang, S.; Zhang, Y.
2018-04-01
Synthetic Aperture Radar (SAR) can provide all-day and all-night observation of the earth in all weather conditions with high resolution, and it is widely used in polar research on sea ice, ice shelves, and glaciers. For glacier monitoring, the frontal position of a calving glacier at different moments in time is of great importance, since it underpins estimates of the calving rate and flux of the glacier. In this abstract, an automatic algorithm for glacier front extraction using time-series Sentinel-1 SAR imagery is proposed. The technique transforms the amplitude imagery of Sentinel-1 SAR into a binary map using the SO-CFAR method; frontal points are then extracted using a profile method that reduces the 2D binary map to 1D binary profiles, and the final frontal position of the calving glacier is the optimal profile selected from the different average segmented profiles. The experiment shows that the detection algorithm can automatically extract glacier frontal positions from SAR data with high efficiency.
Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2016-01-01
Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians. PMID:28269867
NASA Astrophysics Data System (ADS)
Baraldi, P.; Bonfanti, G.; Zio, E.
2018-03-01
The identification of the current degradation state of an industrial component and the prediction of its future evolution is a fundamental step for the development of condition-based and predictive maintenance approaches. The objective of the present work is to propose a general method for extracting a health indicator to measure the amount of component degradation from a set of signals measured during operation. The proposed method is based on the combined use of feature extraction techniques, such as Empirical Mode Decomposition and Auto-Associative Kernel Regression, and a multi-objective Binary Differential Evolution (BDE) algorithm for selecting the subset of features optimal for the definition of the health indicator. The objectives of the optimization are desired characteristics of the health indicator, such as monotonicity, trendability and prognosability. A case study is considered, concerning the prediction of the remaining useful life of turbofan engines. The obtained results confirm that the method is capable of extracting health indicators suitable for accurate prognostics.
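Two of the optimization objectives named above have simple, commonly used definitions; the sketch below uses one such convention (the exact metrics of the paper may differ):

    import numpy as np

    def monotonicity(h):
        # |#positive steps - #negative steps| / (n - 1); 1.0 for a strictly trending indicator
        d = np.diff(h)
        return abs(np.sum(d > 0) - np.sum(d < 0)) / (len(h) - 1)

    def trendability(h, t):
        # absolute linear correlation of the indicator with time
        return abs(np.corrcoef(h, t)[0, 1])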
Lee, Jinseok; Chon, Ki H
2010-09-01
We present particle filtering (PF) algorithms for an accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the pole angle with the highest magnitude as it corresponds to the respiratory rate. However, when SNR is low, the pole angle with the highest magnitude may not always lead to accurate estimation of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach, using a sequential Monte Carlo method, named PF, which is combined with the optimal parameter search (OPS) criterion for an accurate AR model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performances of five different likelihood functions of the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that, for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
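The AR-pole idea behind these methods can be sketched as follows; the OPS criterion and the particle filter itself are omitted, and the least-squares AR fit is one of several possible choices:

    import numpy as np

    def resp_rate_ar(x, fs, order=10):
        # least-squares AR fit: x[k] ~= sum_i a[i] * x[k-i]
        x = np.asarray(x, float) - np.mean(x)
        X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
        a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        poles = np.roots(np.r_[1.0, -a])
        poles = poles[poles.imag > 0]            # keep one of each conjugate pair
        p = poles[np.argmax(np.abs(poles))]      # pole with the highest magnitude
        return np.angle(p) / (2.0 * np.pi) * fs * 60.0  # breaths per minute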
NASA Astrophysics Data System (ADS)
Brakensiek, Joshua; Ragozzine, D.
2012-10-01
The transit method for discovering extra-solar planets relies on detecting regular diminutions of light from stars due to the shadows of planets passing in between the star and the observer. NASA's Kepler Mission has successfully discovered thousands of exoplanet candidates using this technique, including hundreds of stars with multiple transiting planets. In order to estimate the frequency of these valuable systems, our research concerns the efficient calculation of geometric probabilities for detecting multiple transiting extrasolar planets around the same parent star. In order to improve on previous studies that used numerical methods (e.g., Ragozzine & Holman 2010, Tremaine & Dong 2011), we have constructed an efficient, analytical algorithm which, given a collection of conjectured exoplanets orbiting a star, computes the probability that any particular group of exoplanets are transiting. The algorithm applies theorems of elementary differential geometry to compute the areas bounded by circular curves on the surface of a sphere (see Ragozzine & Holman 2010). The implemented algorithm is more accurate and orders of magnitude faster than previous algorithms, based on comparison with Monte Carlo simulations. Expanding this work, we have also developed semi-analytical methods for determining the frequency of exoplanet mutual events, i.e., the geometric probability two planets will transit each other (Planet-Planet Occultation) and the probability that this transit occurs simultaneously as they transit their star (Overlapping Double Transits; see Ragozzine & Holman 2010). The latter algorithm can also be applied to calculating the probability of observing transiting circumbinary planets (Doyle et al. 2011, Welsh et al. 2012). All of these algorithms have been coded in C and will be made publicly available. We will present and advertise these codes and illustrate their value for studying exoplanetary systems.
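For the idealized case of circular, coplanar orbits, the geometric probability that all planets transit reduces to the tightest single constraint, min(R_star/a); a Monte Carlo cross-check of that fact (our illustration, not the authors' analytic code):

    import numpy as np

    def p_all_transit_mc(ratios, n=1_000_000, seed=0):
        # circular, coplanar orbits; ratios[k] = R_star / a_k
        # isotropic observers => cos(inclination) uniform on [-1, 1]
        cos_i = np.random.default_rng(seed).uniform(-1.0, 1.0, n)
        hits = np.ones(n, dtype=bool)
        for r in ratios:
            hits &= np.abs(cos_i) < r
        return hits.mean()

    print(p_all_transit_mc([0.1, 0.05]))  # ~0.05 = min(R_star/a)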
Discriminative region extraction and feature selection based on the combination of SURF and saliency
NASA Astrophysics Data System (ADS)
Deng, Li; Wang, Chunhong; Rao, Changhui
2011-08-01
The objective of this paper is to provide a possible optimization of the salient region algorithm, which is extensively used in recognizing and learning object categories. The salient region algorithm has the advantages of intra-class tolerance, global scoring of features, and automatic selection of prominent scales within a certain range. However, its major limitation is performance, and that is what we attempt to improve. The algorithm can be accelerated by reducing the number of pixels involved in the saliency calculation. We use interest points detected by fast-Hessian, the detector of SURF, as the candidate features for the saliency operation, rather than the whole pixel set of the image. This implementation is therefore called Saliency-based Optimization over SURF (SOSU for short). Experiments show that introducing such a fast detector significantly speeds up the algorithm, while robustness to intra-class diversity ensures object recognition accuracy.
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
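The core of the spectral-gap estimate can be sketched from a trajectory binned along a trial CV; the maximum-caliber reweighting that SGOOP applies when only static information is available is omitted, and the eigenvalue indexing convention here is an assumption:

    import numpy as np

    def spectral_gap(cv_traj, n_bins=20, lag=1, n_slow=2):
        # transition matrix between CV bins at the chosen lag time
        edges = np.histogram_bin_edges(cv_traj, bins=n_bins)
        b = np.clip(np.digitize(cv_traj, edges) - 1, 0, n_bins - 1)
        T = np.zeros((n_bins, n_bins))
        np.add.at(T, (b[:-lag], b[lag:]), 1.0)
        T /= np.maximum(T.sum(axis=1, keepdims=True), 1.0)
        lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
        return lam[n_slow - 1] - lam[n_slow]  # gap after the slow eigenvalues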
Laser guide star wavefront sensing for ground-layer adaptive optics on extremely large telescopes.
Clare, Richard M; Le Louarn, Miska; Béchet, Clementine
2011-02-01
We propose ground-layer adaptive optics (GLAO) to improve the seeing on the 42 m European Extremely Large Telescope. Shack-Hartmann wavefront sensors (WFSs) with laser guide stars (LGSs) will experience significant spot elongation due to off-axis observation. This spot elongation influences the design of the laser launch location, laser power, WFS detector, and centroiding algorithm for LGS GLAO on an extremely large telescope. We show, using end-to-end numerical simulations, that with a noise-weighted matrix-vector-multiply reconstructor, the performance in terms of 50% ensquared energy (EE) of the side and central launch of the lasers is equivalent, the matched filter and weighted center of gravity centroiding algorithms are the most promising, and approximately 10×10 undersampled pixels are optimal. Significant improvement in the 50% EE can be observed with a few tens of photons/subaperture/frame, and no significant gain is seen by adding more than 200 photons/subaperture/frame. The LGS GLAO is not particularly sensitive to the sodium profile present in the mesosphere nor to a short-timescale (less than 100 s) evolution of the sodium profile. The performance of LGS GLAO is, however, sensitive to the atmospheric turbulence profile.
The M 4 Core Project with HST - V. Characterizing the PSFs of WFC3/UVIS by focus
NASA Astrophysics Data System (ADS)
Anderson, J.; Bedin, L. R.
2017-09-01
As part of the astrometric Hubble Space Telescope (HST) large program GO-12911, we conduct an in-depth study to characterize the point spread function (PSF) of the UV-visible (UVIS) channel of the Wide Field Camera 3 (WFC3), as a necessary step to achieve the astrometric goals of the program. We extracted a PSF from each of the 589 deep exposures taken through the F467M filter over the course of a year and find that the vast majority of the PSFs lie along a 1-D locus that stretches continuously from one side of focus, through optimal focus, to the other side of focus. We constructed a focus-diverse set of PSFs and find that with only five medium-bright stars in an exposure it is possible to pin down the focus level of that exposure. We show that the focus-optimized PSF does a considerably better job fitting stars than the average 'library' PSF, especially when the PSF is out of focus. The fluxes and positions are significantly improved over the 'library' PSF treatment. These results are beneficial for a much broader range of scientific applications than simply the program at hand, but the immediate use of these PSFs will enable us to search for astrometric wobble in the bright stars in the core of the globular cluster M 4, which would indicate a dark, high-mass companion, such as a white dwarf, neutron star or black hole.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-07-01
Condition monitoring and fault diagnosis of rolling element bearings are significant to guarantee the reliability and functionality of a mechanical system, production efficiency, and plant safety. However, this is almost invariably a formidable challenge because the fault features are often buried by strong background noise and other unstable interference components. To satisfactorily extract the bearing fault features, a whale optimization algorithm (WOA)-optimized orthogonal matching pursuit (OMP) with a combined time-frequency atom dictionary is proposed in this paper. Firstly, a combined time-frequency atom dictionary, whose atoms combine Fourier dictionary atoms with impact time-frequency dictionary atoms, is designed according to the properties of the bearing fault vibration signal. Furthermore, to improve the efficiency and accuracy of sparse signal representation, the WOA is introduced into the OMP algorithm to optimize the atom parameters for best approximating the original signal with the dictionary atoms. The proposed method is validated by analyzing a simulated bearing fault signal and real vibration signals collected from an experimental bearing and a wheelset bearing of high-speed trains. Comparisons with respect to the state of the art in the field are illustrated in detail, highlighting the advantages of the proposed method.
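The OMP core (without the WOA tuning of atom parameters or the combined dictionary construction) is available off the shelf; a hedged sketch with scikit-learn:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def sparse_code(signal, dictionary, n_atoms=5):
        # greedily select n_atoms columns of `dictionary` to approximate `signal`
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms, fit_intercept=False)
        omp.fit(dictionary, signal)
        return omp.coef_, dictionary @ omp.coef_  # coefficients, reconstruction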
Revision of the Phenomenological Characteristics of the Algol-Type Stars Using the Nav Algorithm
NASA Astrophysics Data System (ADS)
Tkachenko, M. G.; Andronov, I. L.; Chinarova, L. L.
Phenomenological characteristics of a sample of Algol-type stars are revised using the recently developed NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv1212.6707A) and compared to those obtained using common methods of trigonometric polynomial fit (TP) or local algebraic polynomial (A) fit of a fixed or (alternately) statistically optimal degree (1994OAP.....7...49A, 2003ASPC..292..391A). The computer program NAV is introduced, which allows one to determine the best fit with 7 "linear" and 5 "nonlinear" parameters and their error estimates. The number of parameters is much smaller than for the TP fit (typically 20-40, depending on the width of the eclipse) and is much smaller (5-20) for the W UMa and β Lyrae-type stars. This yields a smoother approximation that takes into account the reflection and ellipsoidal effects (TP2) and generally different shapes of the primary and secondary eclipses. An application of the method to two-color CCD photometry of the recently discovered eclipsing variable 2MASS J18024395+4003309 = VSX J180243.9+400331 (2015JASS...32..101A) allowed us to estimate the physical parameters of the binary system from the phenomenological parameters of the light curve. The phenomenological parameters of the light curves were determined for a sample of newly discovered EA- and EW-type stars (VSX J223429.3+552903, VSX J223421.4+553013, VSX J223416.2+553424, USNO-B1.0 1347-0483658, UCAC3 191-085589, VSX J180755.6+074711 = UCAC3 196-166827). Although we used the original observations published by the discoverers, the accuracy estimates of the period obtained with the NAV method are typically better than the original ones.
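For comparison, the TP fit that NAV is benchmarked against is a plain linear least-squares problem; a sketch (NAV's localized eclipse terms are not included):

    import numpy as np

    def trig_poly_fit(phase, mag, degree=4):
        # design matrix [1, cos(2*pi*k*phase), sin(2*pi*k*phase)] for k = 1..degree
        cols = [np.ones_like(phase)]
        for k in range(1, degree + 1):
            cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
        return coef, A @ coef  # 2*degree + 1 parameters and the fitted curve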
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
Searching for transits in the WTS with the difference imaging light curves
NASA Astrophysics Data System (ADS)
Zendejas Dominguez, Jesus
2013-12-01
The search for exo-planets is currently one of the most exciting and active topics in astronomy. Small and rocky planets are particularly the subject of intense research, since if they are at a suitable distance from their host star, they may be warm and potentially habitable worlds. On the other hand, the discovery of giant planets in short-period orbits provides important constraints on models that describe planet formation and orbital migration theories. Several projects are dedicated to discovering and characterizing planets outside of our solar system. Among them, the WFCAM (Wide Field Camera) Transit Survey (WTS) is a pioneering program aimed at searching for extra-solar planets, which stands out for its particular aims and methodology. The WTS has been in operation since August 2007 with observations from the United Kingdom Infrared Telescope, and represents the first survey that searches for transiting planets at near-infrared wavelengths; hence the WTS is designed to discover planets around M-dwarfs. The survey was originally assigned about 200 nights, observing four fields that were selected seasonally (RA = 03, 07, 17 and 19h) during a year. The images from the survey are processed by a data reduction pipeline, which uses aperture photometry to construct the light curves. For the most complete field in the survey (19h; 1145 epochs), we produce an alternative set of light curves using the method of difference imaging, a photometric technique that has shown important advantages when used in crowded fields. A quantitative comparison between the photometric precision achieved with both methods is carried out in this work. We remove systematic effects using the sysrem algorithm, scale the error bars on the light curves, and perform a comparison of the corrected light curves. The results show that the aperture photometry light curves provide slightly better precision for objects with J < 16. However, difference photometry light curves present a significant improvement for fainter stars. In order to detect transits in the WTS light curves, we use a modified version of the box-fitting algorithm. Our implementation of the detection algorithm performs a trapezoid fit to the folded light curve. We show that the new fit is able to produce more accurate results than the box-fit model. We describe a set of selection criteria to search for transit candidates that includes a parameter calculated by our detection algorithm: the V-shape parameter, which has proven useful for automatically identifying and removing eclipsing binaries from the survey. The criteria are optimized using Monte-Carlo simulations of artificial transit signals that are injected into the real WTS light curves and subsequently analyzed by our detection algorithm. We separately optimize the selection criteria for two different sets of light curves, one for F-G-K stars and another for M-dwarfs. In order to search for transiting planet candidates, the optimized selection criteria are applied to the aperture photometry and difference imaging light curves. In this way, the best 200 transit candidates from a sample of ~475,000 sources are automatically selected. A visual inspection of the folded light curves of these detections is carried out to eliminate clear false positives or false detections. Subsequently, several analysis steps are performed on the 18 best detections, which allow us to classify these objects as transiting planet and eclipsing binary candidates.
We report one planet candidate orbiting a late G-type star, which is proposed for photometric follow-up. The independent analysis of the M-dwarf sample provides no planet candidates around these stars. Therefore, the null-detection hypothesis and the upper limits on the occurrence rate of giant planets around M-dwarfs with J < 17 mag presented in a prior study are confirmed. In this work, we extended the search for transiting planets to stars with J < 18 mag, which enables us to impose a stricter upper limit of 1.1% on the occurrence rate of short-period giant planets around M-dwarfs, significantly lower than any other limit published so far. The lack of Hot Jupiters around M-dwarfs plays an important role in the existing theories of planet formation and orbital migration of exo-planets around low-mass stars. The dearth of detections of gas-giant planets in short-period orbits around M stars indicates that it is not necessary to invoke the disk-instability formation mechanism, coupled with an orbital migration process, to explain the presence of such planets around low-mass stars. The much reduced efficiency of the core-accretion model in forming Jupiters around cool stars seems to be in agreement with the current null result. However, our upper limit, the lowest reported so far, is still higher than the detection rates of short-period gas-giant planets around hotter stars. Therefore, we cannot yet reach any firm conclusion about Jovian planet formation models around low-mass and cool main-sequence stars, since there is currently not sufficient observational evidence to support the argument that Hot Jupiters are less common around M-dwarfs than around Sun-like stars. The way to improve this situation is to monitor larger samples of M-stars. For example, an extended analysis of the remaining three WTS fields and currently running M-dwarf transit surveys (like the Pan-Planets and PTF/M-dwarfs projects, which are monitoring up to 100,000 objects) may reduce this upper limit. Current and future space missions like Kepler and GAIA could also help to either set stricter upper limits or finally detect Hot Jupiters around low-mass stars. In the last part of this thesis, we present other applications of the difference imaging light curves. We report the detection of five faint extremely-short-period eclipsing binary systems with periods shorter than 0.23 d, as well as two candidate and one confirmed M-dwarf/M-dwarf eclipsing binaries. The detections and results presented in this work demonstrate the benefits of using the difference imaging light curves, especially when going to fainter magnitudes.
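As a point of reference for the box-fitting step (the trapezoid fit and the V-shape parameter described above are this work's additions and are not in the library), a box least squares search with Astropy on a toy light curve:

    import numpy as np
    from astropy.timeseries import BoxLeastSquares

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 30.0, 3000))      # days
    y = 1.0 + 1e-4 * rng.standard_normal(t.size)
    y[(t % 2.5) < 0.1] -= 0.01                     # injected transit: P = 2.5 d, 0.1 d long

    bls = BoxLeastSquares(t, y)
    result = bls.autopower(0.1)                    # trial duration of 0.1 d
    print(result.period[np.argmax(result.power)])  # ~2.5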
NASA Astrophysics Data System (ADS)
Ruffio, Jean-Baptiste; Macintosh, Bruce; Wang, Jason J.; Pueyo, Laurent; Nielsen, Eric L.; De Rosa, Robert J.; Czekala, Ian; Marley, Mark S.; Arriaga, Pauline; Bailey, Vanessa P.; Barman, Travis; Bulger, Joanna; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin L.; Goodsell, Stephen J.; Graham, James R.; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Marois, Christian; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Morzinski, Katie M.; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall; Poyneer, Lisa; Rajan, Abhijith; Rameau, Julien; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler
2017-06-01
We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
NASA Astrophysics Data System (ADS)
Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira
2018-02-01
A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE), with UV-Vis spectrophotometry as the detection technique, was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as the amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, take time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The remaining optimization steps were performed using a central composite design (CCD), an artificial neural network (ANN) and a genetic algorithm (GA). Under optimal conditions the calibration curve was linear over the concentration range 10^-8 to 10^-7 M with a correlation coefficient (R2) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10^-9 M, and the repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples.
A novel automated spike sorting algorithm with adaptable feature extraction.
Bestel, Robert; Daus, Andreas W; Thielemann, Christiane
2012-10-15
To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, wavelet and principal-component-based features and (ii) automatically derives a feature subset most suitable for sorting an individual set of spike signals. The new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates are formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation-maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed excellent classification results, indicating the superior performance of the described algorithm.
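A generic backbone of such a pipeline (fixed PCA features plus expectation-maximisation clustering via a Gaussian mixture), without the adaptive feature-subset selection that is the paper's contribution:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def sort_spikes(waveforms, n_features=3, n_units=3):
        # waveforms: (n_spikes, n_samples) array of aligned spike snippets
        feats = PCA(n_components=n_features).fit_transform(waveforms)
        gmm = GaussianMixture(n_components=n_units, random_state=0)
        return gmm.fit_predict(feats)  # putative unit label per spike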
A simple suboptimal least-squares algorithm for attitude determination with multiple sensors
NASA Technical Reports Server (NTRS)
Brozenec, Thomas F.; Bender, Douglas J.
1994-01-01
Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as the matrix determinant, the matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough that the approximation sin(theta) ≈ theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two-measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to three or fewer measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
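A hedged sketch of the unconstrained least-squares step; the optional SVD projection shown restores orthogonality (it is essentially the SVD solution of Wahba's problem) and is included only for comparison:

    import numpy as np

    def attitude_lstsq(V, W, orthogonalize=False):
        # columns of V: reference vectors; columns of W: measured vectors; find A with A V ~= W
        A, *_ = np.linalg.lstsq(V.T, W.T, rcond=None)
        A = A.T
        if orthogonalize:
            U, _, Vt = np.linalg.svd(A)          # project onto proper rotations
            A = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        return A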
Clustering-based Feature Learning on Variable Stars
NASA Astrophysics Data System (ADS)
Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos
2016-04-01
The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.
Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Liu, Ze; Xu, Jing
2016-01-01
Shearers play an important role in the fully mechanized coal mining face, and accurately identifying their cutting pattern is very helpful for improving the automation level of shearers and ensuring the safety of coal mining. The least squares support vector machine (LSSVM) has been proven to offer strong potential in prediction and classification issues, particularly by employing an appropriate meta-heuristic algorithm to determine the values of its two parameters. However, these meta-heuristic algorithms have the drawbacks of being hard to understand and reaching the global optimal solution slowly. In this paper, an improved fruit fly optimization algorithm (IFOA) to optimize the parameters of the LSSVM was presented, and the LSSVM coupled with IFOA (IFOA-LSSVM) was used to identify the shearer cutting pattern. The vibration acceleration signals of five cutting patterns were collected, and the special state features were extracted based on ensemble empirical mode decomposition (EEMD) and the kernel function. Examples of the IFOA-LSSVM model were further presented, and the results were compared with LSSVM, PSO-LSSVM, GA-LSSVM and FOA-LSSVM models in detail. The comparison results indicate that the proposed approach is feasible, efficient and outperforms the others. Finally, an industrial application example at the coal mining face was presented to demonstrate the effectiveness of the proposed system. PMID:26771615
Cheng, Jianhua; Wang, Tongda; Wang, Lu; Wang, Zhenmin
2017-01-01
Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms which use the measurement information provided by the master SINS would lose their effectiveness. In this paper, a new polar TA algorithm with the aid of a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to deal with these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal system model information. Then, the AUKF is used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than that of the state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper is effective in ensuring and improving the accuracy of TA in the harsh polar environment. PMID:29065521
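A minimal adaptive-UKF sketch: a plain UKF whose measurement-noise matrix R is rescaled from the innovation sequence, one simple flavor of the adaptation the paper describes. The filterpy library, the one-state toy model, and the forgetting-factor rule are all illustrative assumptions, not the paper's filter design.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):   # process model: constant state (stand-in for SINS dynamics)
    return x

def hx(x):       # measurement model: the star sensor observes the state directly
    return x

points = MerweScaledSigmaPoints(n=1, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=1, dim_z=1, dt=1.0, hx=hx, fx=fx, points=points)
ukf.x = np.array([0.0])
ukf.Q *= 1e-4
ukf.R *= 0.1

rho = 0.95                                    # forgetting factor for R adaptation
for z in np.random.randn(100) * 0.3:          # synthetic star-sensor measurements
    ukf.predict()
    ukf.update(np.array([z]))
    innov = np.outer(ukf.y, ukf.y)            # innovation outer product
    ukf.R = rho * ukf.R + (1 - rho) * innov   # adapt R when model info is abnormal
```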
STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission
NASA Astrophysics Data System (ADS)
Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.
2018-05-01
STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.
NASA Astrophysics Data System (ADS)
Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong
2017-01-01
A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) was proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. Combining the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT) and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships from various water quality variables. To find the optimal values for the parameters of the proposed FWNN, a hybrid learning algorithm integrating an improved genetic optimization and a gradient descent algorithm is employed. The results show that, compared with the NN model (optimized by GA) and the kinetic model, the proposed FWNN model has quicker convergence, higher prediction performance, a smaller RMSE (0.080), MSE (0.0064) and MAPE (1.8158), and a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanism model.
Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong
2017-01-01
A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) was proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. Combining the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT) and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships from various water quality variables. To find the optimal values for the parameters of the proposed FWNN, a hybrid learning algorithm integrating an improved genetic optimization and a gradient descent algorithm is employed. The results show that, compared with the NN model (optimized by GA) and the kinetic model, the proposed FWNN model has quicker convergence, higher prediction performance, a smaller RMSE (0.080), MSE (0.0064) and MAPE (1.8158), and a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanism model. PMID:28120889
Optimality in Data Assimilation
NASA Astrophysics Data System (ADS)
Nearing, Grey; Yatheendradas, Soni
2016-04-01
It costs a lot more to develop and launch an earth-observing satellite than it does to build a data assimilation system. As such, we propose that it is important to understand the efficiency of our assimilation algorithms at extracting information from remote sensing retrievals. To address this, we propose that it is necessary to adopt a completely general definition of "optimality" that explicitly acknowledges all differences between the parametric constraints of our assimilation algorithm (e.g., Gaussianity, partial linearity, Markovian updates) and the true nature of the environmental system and observing system. In fact, it is not only possible, but incredibly straightforward, to measure the optimality (in this more general sense) of any data assimilation algorithm as applied to any intended model or natural system. We measure the information content of remote sensing data conditional on the fact that we are already running a model, and then measure the actual information extracted by data assimilation. The ratio of the two is an efficiency metric, and optimality is defined as occurring when the data assimilation algorithm is perfectly efficient at extracting information from the retrievals. We measure the information content of the remote sensing data in a way that, unlike triple collocation, does not rely on any a priori presumed relationship (e.g., linear) between the retrieval and the ground truth, but that, like triple collocation, is insensitive to the spatial mismatch between point-based measurements and grid-scale retrievals. This theory and method are therefore suitable for use with both dense and sparse validation networks. Additionally, the method we propose is *constructive* in the sense that it provides guidance on how to improve data assimilation systems. All data assimilation strategies can be reduced to approximations of Bayes' law, and we measure the fractions of total information loss that are due to individual assumptions or approximations in the prior (i.e., the model uncertainty distribution) and in the likelihood (i.e., the observation operator and observation uncertainty distribution). In this way, we can directly identify the parts of a data assimilation algorithm that contribute most to assimilation error in a way that (unlike traditional DA performance metrics) considers nonlinearity in the model and observations and non-optimality in the fit between filter assumptions and the real system. To reiterate, the method we propose is theoretically rigorous but also dead-to-rights simple, and can be implemented in no more than a few hours by a competent programmer. We use this to show that careful applications of the Ensemble Kalman Filter use substantially less than half of the information contained in remote sensing soil moisture retrievals (LPRM, AMSR-E, SMOS, and SMOPS). We propose that this finding may explain some of the results from several recent large-scale experiments that show lower-than-expected value in assimilating soil moisture retrievals into land surface models forced by high-quality precipitation data. Our results have important implications for the SMAP mission because over half of the SMAP-affiliated "early adopters" plan to use the EnKF as their primary method for extracting information from SMAP retrievals.
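A sketch of the proposed efficiency metric: the information the analysis actually extracts about the truth, divided by the information the retrievals contain. The histogram-based mutual-information estimator and variable names are illustrative assumptions.

```python
import numpy as np

def mutual_info(a, b, bins=20):
    """I(A;B) in nats from a 2-D histogram estimate."""
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab = pab / pab.sum()
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    nz = pab > 0
    return float((pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])).sum())

def assimilation_efficiency(truth, retrieval, analysis):
    """1.0 means the filter extracted all information in the retrievals."""
    available = mutual_info(truth, retrieval)  # info content of the retrievals
    extracted = mutual_info(truth, analysis)   # info actually in the DA analysis
    return extracted / available
```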
Texas two-step: a framework for optimal multi-input single-output deconvolution.
Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G
2007-11-01
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
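A sketch of the two-step idea in the Fourier domain: collapse the multi-input problem to a single-input one via the matched-filter sufficient statistic, then apply any SISO deconvolver (a Wiener filter here). Equal noise levels across channels and a known signal PSD are assumptions of this sketch, not claims about the paper's wavelet/curvelet variants.

```python
import numpy as np

def texas_two_step(observations, blurs, noise_var, signal_psd):
    Ys = [np.fft.fft(y) for y in observations]
    Hs = [np.fft.fft(h, n=len(y)) for h, y in zip(blurs, observations)]
    # Step 1: sufficient statistic -- matched-filter combination of channels.
    num = sum(np.conj(H) * Y for H, Y in zip(Hs, Ys))
    Heff = sum(np.abs(H) ** 2 for H in Hs)   # effective SISO blur (in power)
    # Step 2: SISO Wiener deconvolution of the combined channel.
    X = num * signal_psd / (Heff * signal_psd + noise_var)
    return np.real(np.fft.ifft(X))
```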
Wang, Jie-sheng; Han, Shuang; Shen, Na-na
2014-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
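A hedged sketch of the dimensionality-reduction step: KPCA compresses the froth-image features and process data before they feed the soft-sensor model. The feature matrix and parameter values below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

X = np.random.rand(200, 30)        # rows: samples; cols: GLCM texture + color + process data
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.1)
X_reduced = kpca.fit_transform(X)  # nonlinear principal components for the ESN input
print(X_reduced.shape)             # (200, 8): smaller ESN input, lower network complexity
```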
SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy
Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui
2014-01-01
Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
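An illustration of the kernel-versus-training-proportion trade-off the paper quantifies. The feature matrix and labels below are synthetic stand-ins for the extracted fingerprint features, not the paper's databases.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X = np.random.rand(2000, 4)        # 4 features: ridge distance, gray, frequency, corner
y = (X @ np.array([1.0, -0.5, 0.8, -1.2]) > 0).astype(int)   # synthetic labels

for kernel in ("poly", "rbf"):
    for train_frac in (0.05, 0.15):
        Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=train_frac, random_state=0)
        acc = SVC(kernel=kernel).fit(Xtr, ytr).score(Xte, yte)
        print(f"{kernel:4s} @ {train_frac:.0%} training data: accuracy {acc:.3f}")
```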
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, as of now, no work has been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing the dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
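A bare-bones Tabu Search minimizer of the kind the paper substitutes for ADMM inside the NN-SI loop. The quadratic test objective, neighborhood size, and tabu radius are illustrative assumptions.

```python
import numpy as np

def tabu_search(objective, x0, n_iter=200, n_neighbors=20, step=0.1, tabu_len=15):
    x, best = x0.copy(), x0.copy()
    tabu = []                                  # recently visited points
    for _ in range(n_iter):
        cands = x + step * np.random.randn(n_neighbors, len(x))
        # forbid moves that land too close to a tabu point
        ok = [c for c in cands if all(np.linalg.norm(c - t) > step / 2 for t in tabu)]
        if not ok:
            continue
        x = min(ok, key=objective)             # best admissible neighbor
        tabu.append(x.copy())
        tabu = tabu[-tabu_len:]                # fixed-length tabu list
        if objective(x) < objective(best):
            best = x.copy()
    return best

f = lambda v: float(np.sum((v - 3.0) ** 2))    # stand-in for the NN-SI cost
print(tabu_search(f, np.zeros(2)))             # converges near [3, 3]
```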
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Datta, Bithin
2011-07-01
Overexploitation of the coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, it involves huge cost and long-term remediation measures to remediate these contaminated aquifers. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of saltwater intrusion prone aquifers. A linked simulation-optimization-based management strategy is developed. The methodology uses genetic-programming-based models for simulating the aquifer processes, which is then linked to a multi-objective genetic algorithm to obtain optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.
MARVELS 1D Pipeline Development, Optimization, and Performance
NASA Astrophysics Data System (ADS)
Thomas, Neil; Ge, Jian; Grieves, Nolan; Li, Rui; Sithajan, Sirinrat
2016-04-01
We describe the processing pipeline of one-dimensional spectra from the SDSS III Multi-object APO Radial Velocity Exoplanet Large-area Survey (MARVELS). This medium-resolution interferometric spectroscopic survey observed over 3300 stars over the course of four years with the primary goal of detecting and characterizing giant planets (>0.5 M Jup) from within a large, homogeneous sample of FGK stars. The successful extraction of radial velocities (RVs) from MARVELS is complicated by several instrument effects. The wide field nature of this multi-object spectrograph provides spectra that are initially distorted and require conditioning of the raw images for precise RV extraction. Also, the simultaneous observation of sixty stars per exposure leads to several effects not typically seen in a single-object instrument. For instance, fiber illumination changes over time can easily create the dominant source of RV measurement error when these changes are different for the stellar and calibration optical paths. We present a method for statistically quantifying these instrument effects to combat the difficulty of giant planet detection due to systematic RV errors. We also present an overview of the performance of the entire survey as it stands for the SDSS III DR 12 as well as key results from the very latest improvements. This includes a novel technique, called lucky RV, by which stable regions of spectra can be statistically determined and emphasized during RV extraction, leading to a large reduction of the long-term RV offsets in the MARVELS data. These improved RV data are to be released via NASA Exoplanet Archive in the fall of 2015.
NASA Astrophysics Data System (ADS)
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare
2017-11-01
This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with hyper-hemispheric lens and used as star tracker. The sensor architecture is also original since state-of-the-art star trackers accurately image as many stars as possible within a narrow- or medium-size field-of-view, while the considered sensor observes an extremely large portion of the celestial sphere but its observation capabilities are limited by the features of the optical system. The proposed original approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotic research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with accuracy better than 1° with a success rate around 98% evaluated by densely covering the entire space of the parameters representing the camera pointing in the inertial space.
Unsupervised classification of variable stars
NASA Astrophysics Data System (ADS)
Valenzuela, Lucas; Pichara, Karim
2018-03-01
During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a lot of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating insufficient training sets compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific to light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.
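A sketch of the query-by-similarity idea: rank unlabeled light curves against a query using a cheap distance on interpolated, normalized curves. The paper's actual similarity function and index structure are more elaborate; everything below is an illustrative simplification.

```python
import numpy as np

def normalize(lc, n=200):
    t = np.linspace(0.0, 1.0, n)
    x = np.interp(t, np.linspace(0.0, 1.0, len(lc)), lc)  # common time grid
    return (x - x.mean()) / (x.std() + 1e-12)

def rank_by_similarity(query, catalog, k=10):
    q = normalize(query)
    dists = [np.linalg.norm(q - normalize(lc)) for lc in catalog]
    return np.argsort(dists)[:k]       # indices of the k most similar curves
```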
Detecting microsatellites within genomes: significant variation among algorithms.
Leclercq, Sébastien; Rivals, Eric; Jarne, Philippe
2007-04-18
Microsatellites are short, tandemly-repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared by fixing parameter settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of size and structure. Significant differences for all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should therefore be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions.
Detecting microsatellites within genomes: significant variation among algorithms
Leclercq, Sébastien; Rivals, Eric; Jarne, Philippe
2007-01-01
Background Microsatellites are short, tandemly-repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Results Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared by fixing parameter settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of size and structure. Significant differences for all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Conclusion Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should therefore be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions. PMID:17442102
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is roughly matched by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, the fundamental matrix generated by the eight-point algorithm is used as the model; the sample is selected by a random block selecting method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on the basis of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
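A rough equivalent of this pipeline with stock OpenCV: SIFT features, descriptor matching, then RANSAC with the fundamental matrix as the model. The paper's block sampling and SPRT additions are not part of the stock cv2 call, and the image file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Fundamental matrix as the RANSAC model (instead of a homography).
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
inliers = mask.ravel() == 1
print(f"{inliers.sum()} / {len(matches)} matches kept as inliers")
```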
Intelligent error correction method applied on an active pixel sensor based star tracker
NASA Astrophysics Data System (ADS)
Schmidt, Uwe
2005-10-01
Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. During the last years star trackers have become more and more important in the field of attitude and orbit control system (AOCS) sensors. High-performance star trackers have to date been based on charge-coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor of the CCD based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-window readout and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts automatically to, e.g., increasing DSNU and newly appearing white spots, without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need calibration data memory such as full-image-sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications like geostationary telecom platforms.
NASA Astrophysics Data System (ADS)
Wu, Jianfeng; Zheng, Li; Liu, Depeng
2007-11-01
Gaoqing Plain is a major agriculture center of Shandong Province in northern China. Over the last 30 years, the diversion of Yellow River water for intensive irrigation in Gaoqing Plain has led to elevation of the water table and increased evaporation, and subsequently, a dramatic increase in soil salt content and rapid degradation of crop productivity. Optimal strategies have been explored that balance the need to extract sufficient groundwater for irrigation (to ease the pressure on diverting Yellow River water) with the need to improve the local environment by appropriately lowering the water table. Two simulation-optimization models have been formulated and a genetic algorithm (GA) is applied to search for the optimal groundwater development strategies in Gaoqing Plain, while keeping the adverse environmental impacts in check. Compared with the trial-and-error approach of previous studies, the optimization results demonstrate that using an optimization model coupled with a GA search is both effective and efficient. The optimal solutions identified by the GA will provide Gaoqing Plain with blueprints for developing sustainable groundwater abstraction plans to support local economic development and improve its environmental quality.
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
RED RUNAWAYS II: LOW-MASS HILLS STARS IN SDSS STRIPE 82
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yanqiong; Smith, Martin C.; Carlin, Jeffrey L., E-mail: zhangyq@shao.ac.cn, E-mail: msmith@shao.ac.cn
Stars ejected from the Galactic Center can be used to place important constraints on the Milky Way potential. Since existing hypervelocity stars are too distant to accurately determine orbits, we have conducted a search for nearby candidates using full three-dimensional velocities. Since the efficacy of such studies is often hampered by deficiencies in proper motion catalogs, we have chosen to utilize the reliable, high-precision Sloan Digital Sky Survey (SDSS) Stripe 82 proper motion catalog. Although we do not find any candidates which have velocities in excess of the escape speed, we identify 226 stars on orbits that are consistent with Galactic Center ejection. This number is significantly larger than what we would expect for halo stars on radial orbits and cannot be explained by disk or bulge contamination. If we restrict ourselves to metal-rich stars, we find 29 candidates with [Fe/H] > −0.8 dex and 10 with [Fe/H] > −0.6 dex. Their metallicities are more consistent with what we expect for bulge ejecta, and so we believe these candidates are especially deserving of further study. We have supplemented this sample using our own radial velocities, developing an algorithm to use proper motions for optimizing candidate selection. This technique provides considerable improvement on the blind spectroscopic sample of SDSS, being able to identify candidates with an efficiency around 20 times better than a blind search.
New Techniques for High-contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Astrophysics Data System (ADS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Goto, M.; Grady, C. A.; Guyon, O.; Hashimoto, J.; Hayano, Y.; Hayashi, M.; Hayashi, S.; Henning, T.; Hodapp, K. W.; Ishii, M.; Iye, M.; Janson, M.; Kandori, R.; Knapp, G. R.; Kudo, T.; Kusakabe, N.; Kuzuhara, M.; Kwon, J.; Matsuo, T.; Miyama, S.; Morino, J.-I.; Moro-Martín, A.; Nishimura, T.; Pyo, T.-S.; Serabyn, E.; Suto, H.; Suzuki, R.; Takami, M.; Takato, N.; Terada, H.; Thalmann, C.; Tomono, D.; Watanabe, M.; Wisniewski, J. P.; Yamada, T.; Takami, H.; Usuda, T.; Tamura, M.
2013-02-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a Berkeley Software Distribution (BSD) license. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
Topic Transition in Educational Videos Using Visually Salient Words
ERIC Educational Resources Information Center
Gandhi, Ankit; Biswas, Arijit; Deshmukh, Om
2015-01-01
In this paper, we propose a visual saliency algorithm for automatically finding the topic transition points in an educational video. First, we propose a method for assigning a saliency score to each word extracted from an educational video. We design several mid-level features that are indicative of visual saliency. The optimal feature combination…
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a big challenge to extract optimal features that improve classification while simultaneously decreasing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately. Further, it estimates the damping exponents. The proposed adaptive filtration method does not include any frequency-domain manipulation; consequently, the time-domain signal is not affected as a result of frequency-domain and inverse transformations.
Shirodkar, Priyanka V; Muraleedharan, Usha Devi
2017-11-26
Amylases are a group of enzymes with a wide variety of industrial applications. Enhancement of α-amylase production from the marine protists, thraustochytrids, has been attempted for the first time by applying statistical-based experimental designs using response surface methodology (RSM) and a genetic algorithm (GA) for optimization of the most influential process variables. A full factorial central composite experimental design was used to study the cumulative interactive effect of nutritional components, viz., glucose, corn starch, and yeast extract. RSM was performed on two objectives, that is, growth of Ulkenia sp. AH-2 (ATCC® PRA-296) and α-amylase activity. When GA was conducted for maximization of the enzyme activity, the optimal α-amylase activity was found to be 71.20 U/mL, which was close to that obtained by RSM (71.93 U/mL), both of which were in agreement with the predicted value of 72.37 U/mL. Optimal growth at the optimized process variables was found to be 1.89 (A660nm). The optimized medium increased α-amylase production by 1.2-fold.
Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan
2016-01-01
Inspired by track-before-detection technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines the advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for optimal polynomial parameters with which the PCFT achieves a most concentrated energy ridge in the time-frequency plane for the target component. The component can be well separated in the polynomial chirping Fourier domain with a narrow-band filter and then reconstructed by the inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method has better performance in component extraction from a noisy multicomponent signal, as well as providing more time-frequency details about the analyzed signal, than conventional methods.
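The core PCFT idea in a few lines: demodulate by a trial polynomial chirp so the target component becomes a near-pure tone, then take a Fourier transform. In the paper a PSO searches the polynomial coefficients for maximum spectral concentration; here the coefficients are assumed known for illustration.

```python
import numpy as np

fs, T = 1000.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)
coeffs = [50.0, 40.0, -20.0]                  # f(t) = 50 + 40 t - 20 t^2 (Hz)
phase = 2 * np.pi * (coeffs[0] * t + coeffs[1] * t**2 / 2 + coeffs[2] * t**3 / 3)
signal = np.cos(phase) + 0.5 * np.random.randn(len(t))

# PCFT with matched coefficients: energy collapses onto a narrow ridge at f0.
demod = signal * np.exp(-1j * (phase - 2 * np.pi * coeffs[0] * t))
spectrum = np.abs(np.fft.fft(demod))
freqs = np.fft.fftfreq(len(t), 1.0 / fs)
f0 = freqs[np.argmax(spectrum)]
print(f"energy concentrated near {f0:.1f} Hz")  # ~50 Hz when coefficients match
```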
Dispatch Scheduling to Maximize Exoplanet Detection
NASA Astrophysics Data System (ADS)
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
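A toy dispatch scheduler in the spirit described: at each decision time, every observable target is scored by a weighting function and the top-ranked star is observed next. The weight terms and target fields below are invented for illustration, not MINERVA's actual function.

```python
import numpy as np

def weight(target, now):
    """Higher is better: favor overdue, high-priority, well-placed targets."""
    overdue = (now - target["last_obs"]) / target["cadence"]   # >1 means overdue
    return target["priority"] * overdue * target["altitude_ok"]

def next_target(targets, now):
    scores = [weight(t, now) for t in targets]
    return targets[int(np.argmax(scores))]

targets = [
    {"name": "HD 1", "priority": 2.0, "cadence": 1.0, "last_obs": 0.0, "altitude_ok": 1},
    {"name": "HD 2", "priority": 1.0, "cadence": 0.6, "last_obs": 0.0, "altitude_ok": 1},
    {"name": "HD 3", "priority": 3.0, "cadence": 2.0, "last_obs": 0.0, "altitude_ok": 0},
]
print(next_target(targets, now=1.0)["name"])   # HD 1: weighted most valuable now
```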
A new logistic dynamic particle swarm optimization algorithm based on random topology.
Ni, Qingjian; Deng, Jianming
2013-01-01
Population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies are usually used in PSO, such as the fully connected topology, ring topology, star topology, and square topology. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and the performance of PSO is also explored from the perspective of graph-theoretic characteristics of population topologies. Further, in a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies of population topology. Finally, the experimental data are analyzed and discussed, and useful conclusions about the design and use of population topology in PSO are proposed, which can provide a basis for further discussion and research.
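A minimal PSO where each particle's social attractor is the best of its ring neighbors rather than the global best, illustrating how topology changes the flow of optimal information. The constants are conventional textbook values, not the paper's settings.

```python
import numpy as np

def pso_ring(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = np.random.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            ring = [(i - 1) % n, i, (i + 1) % n]     # ring topology neighborhood
            lbest = pbest[ring[np.argmin(pval[ring])]]
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (lbest - x[i])
            x[i] += v[i]
            if f(x[i]) < pval[i]:
                pbest[i], pval[i] = x[i].copy(), f(x[i])
    return pbest[np.argmin(pval)]

print(pso_ring(lambda z: float(np.sum(z ** 2))))     # converges near the origin
```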
Auzias, G; Brun, L; Deruelle, C; Coulon, O
2015-05-01
Recent interest has been growing concerning points of maximum depth within folds, the sulcal pits, which can be used as reliable cortical landmarks. These remarkable points on the cortical surface are defined algorithmically as the outcome of an automatic extraction procedure. The influence of several crucial parameters of the reference technique (Im et al., 2010) has not been evaluated extensively, and no optimization procedure has been proposed so far. Designing an appropriate optimization framework for these parameters is mandatory to guarantee the reproducibility of results across studies and to ensure the feasibility of sulcal pit extraction and analysis on large cohorts. In this work, we propose a framework specifically dedicated to the optimization of the parameters of the method. This optimization framework relies on new measures for better quantifying the reproducibility of the number of sulcal pits per region across individuals, in line with the assumption of one-to-one correspondence of sulcal roots across individuals, which is an explicit aspect of the sulcal roots model (Régis et al., 2005). Our procedure benefits from a combination of improvements, including the use of a convenient sulcal depth estimation, and is methodologically sound. Our experiments on two different groups of individuals, with a total of 137 subjects, show increased reliability across subjects in deeper sulcal pits, as compared with the previous approach, and cover the entire cortical surface, including shallower and more variable folds that were not considered before. The effectiveness of our method ensures the feasibility of a systematic study of sulcal pits on large cohorts. On top of these methodological advances, we quantify the relationship between the reproducibility of the number of sulcal pits per region across individuals and their respective depth, and demonstrate the relatively high reproducibility of several pits corresponding to shallower folds. Finally, we report new results regarding local pit asymmetry, providing evidence that the algorithmic and conceptual approach defended here may contribute to a better understanding of the key role of sulcal pits in neuroanatomy.
The Double Star Orbit Initial Value Problem
NASA Astrophysics Data System (ADS)
Hensley, Hagan
2018-04-01
Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best guess approximation of the orbital solution. This is a necessary requirement before using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.
NASA Astrophysics Data System (ADS)
Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves
2015-04-01
Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting with the extraction of corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted, during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows edges poorly detected by the Canny detector using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.
Huang, Jie; Shi, Tielin; Tang, Zirong; Zhu, Wei; Liao, Guanglan; Li, Xiaoping; Gong, Bo; Zhou, Tengyuan
2017-08-01
We propose a bi-objective optimization model for extracting optical fiber background from the measured surface-enhanced Raman spectroscopy (SERS) spectrum of the target sample in the application of fiber optic SERS. The model is built using curve fitting to resolve the SERS spectrum into several individual bands, and simultaneously matching some resolved bands with the measured background spectrum. The Pearson correlation coefficient is selected as the similarity index and its maximum value is pursued during the spectral matching process. An algorithm is proposed, programmed, and demonstrated successfully in extracting optical fiber background or fluorescence background from the measured SERS spectra of rhodamine 6G (R6G) and crystal violet (CV). The proposed model not only can be applied to remove optical fiber background or fluorescence background for SERS spectra, but also can be transferred to conventional Raman spectra recorded using fiber optic instrumentation.
Vicinal light inspection of translucent materials
Burns, George R [Albuquerque, NM]; Yang, Pin [Albuquerque, NM]
2010-01-19
The present invention includes methods and apparatus for inspecting vicinally illuminated non-patterned areas of translucent materials. An initial image of the material is received. A second image is received following a relative translation between the material being inspected and a device generating the images. Each vicinally illuminated image includes a portion having optimal illumination, that can be extracted and stored in a composite image of the non-patterned area. The composite image includes aligned portions of the extracted image portions, and provides a composite having optimal illumination over a non-patterned area of the material to be inspected. The composite image can be processed by enhancement and object detection algorithms, to determine the presence of, and characterize any inhomogeneities present in the material.
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply reduces to a convex programming problem for which a closed-form solution is guaranteed. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the use of the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.
Simultaneous parameter optimization of x-ray and neutron reflectivity data using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Surendra, E-mail: surendra@barc.gov.in; Basu, Saibal
2016-05-23
X-ray and neutron reflectivity are two nondestructive techniques which provide a wealth of information on thickness, structure and interfacial properties on the nanometer length scale. The combination of X-ray and neutron reflectivity is well suited for obtaining the physical parameters of nanostructured thin films and superlattices. Neutrons provide a different contrast between the elements than X-rays and are also sensitive to the magnetization depth profile in thin films and superlattices. The real-space information is extracted by fitting a model for the structure of the thin film sample in reflectometry experiments. We have applied a genetic algorithm technique to extract depth-dependent structure and magnetism in thin film and multilayer systems by simultaneously fitting X-ray and neutron reflectivity data.
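A sketch of the simultaneous-fitting idea: one GA population of structural parameters scored against the X-ray and neutron reflectivity curves at once, so both data sets constrain the same model. The two-parameter model interface, bounds, and GA operators are synthetic placeholders, not the authors' implementation.

```python
import numpy as np

def chi2(params, q, data, model):
    return float(np.sum((model(q, *params) - data) ** 2))

def ga_cofit(model_x, model_n, q, data_x, data_n, bounds, pop=60, gens=100):
    lo, hi = np.array(bounds).T
    P = lo + (hi - lo) * np.random.rand(pop, len(lo))
    for _ in range(gens):
        # shared fitness: both reflectivity curves must be matched by one model
        fit = np.array([chi2(p, q, data_x, model_x) + chi2(p, q, data_n, model_n)
                        for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]
        moms, dads = elite[np.random.randint(len(elite), size=(2, pop))]
        P = 0.5 * (moms + dads)                            # crossover: blend parents
        P += 0.02 * (hi - lo) * np.random.randn(*P.shape)  # mutation
        P = np.clip(P, lo, hi)
    fit = np.array([chi2(p, q, data_x, model_x) + chi2(p, q, data_n, model_n)
                    for p in P])
    return P[np.argmin(fit)]
```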
Playing biology's name game: identifying protein names in scientific text.
Hanisch, Daniel; Fluck, Juliane; Mevissen, Heinz-Theodor; Zimmer, Ralf
2003-01-01
A growing body of work is devoted to the extraction of protein or gene interaction information from the scientific literature. Yet, the basis for most extraction algorithms, i.e. the specific and sensitive recognition of protein and gene names and their numerous synonyms, has not been adequately addressed. Here we describe the construction of a comprehensive general purpose name dictionary and an accompanying automatic curation procedure based on a simple token model of protein names. We designed an efficient search algorithm to analyze all abstracts in MEDLINE in a reasonable amount of time on standard computers. The parameters of our method are optimized using machine learning techniques. Used in conjunction, these ingredients lead to good search performance. A supplementary web page is available at http://cartan.gmd.de/ProMiner/.
Improvements in Space Surveillance Processing for Wide Field of View Optical Sensors
NASA Astrophysics Data System (ADS)
Sydney, P.; Wetterer, C.
2014-09-01
For more than a decade, an autonomous satellite tracking system at the Air Force Maui Optical and Supercomputing (AMOS) observatory has been generating routine astrometric measurements of Earth-orbiting Resident Space Objects (RSOs) using small commercial telescopes and sensors. Recent work has focused on developing an improved processing system, enhancing measurement performance and response while supporting other sensor systems and missions. This paper will outline improved techniques in scheduling, detection, astrometric and photometric measurements, and catalog maintenance. The processing system now integrates with Special Perturbation (SP) based astrodynamics algorithms, allowing covariance-based scheduling and more precise orbital estimates and object identification. A merit-based scheduling algorithm provides a global optimization framework to support diverse collection tasks and missions. The detection algorithms support a range of target tracking and camera acquisition rates. New comprehensive star catalogs allow for more precise astrometric and photometric calibrations including differential photometry for monitoring environmental changes. This paper will also examine measurement performance with varying tracking rates and acquisition parameters.
Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng
2017-01-01
Long-term comparisons of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, in which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to a body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in the infrared images during the registration process. The proposed automatic longitudinal infrared registration algorithm develops an automatic vascular intersection detection method and establishes feature descriptors by shape context to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm quickly leads to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provides a novel method of extracting a greater amount of useful data from infrared images. PMID:28145474
Inhibitory Effects of Spices on Biogenic Amine Accumulation during Fish Sauce Fermentation.
Zhou, Xuxia; Qiu, Mengting; Zhao, Dandan; Lu, Fei; Ding, Yuting
2016-04-01
The presence of high levels of biogenic amines is detrimental to the quality and safety of fish sauce. This study investigated the effects of ethanol extracts of spices, including garlic, ginger, cinnamon, and star anise extracts, in reducing the accumulation of biogenic amines during fish sauce fermentation. The concentrations of biogenic amines, which include histamine, putrescine, tyramine, and spermidine, all increased during fish sauce fermentation. When compared with the samples without spices, the garlic and star anise extracts significantly reduced these increases. The greatest inhibitory effect was observed for the garlic ethanolic extracts. When compared with controls, the histamine, putrescine, tyramine, and spermidine contents and the overall biogenic amine levels of the garlic extract-treated samples were reduced by 30.49%, 17.65%, 26.03%, 37.20%, and 27.17%, respectively. The garlic, cinnamon, and star anise extracts showed significant inhibitory effects on aerobic bacteria counts. Furthermore, the garlic and star anise extracts showed antimicrobial activity against amine producers. These findings may be helpful for enhancing the safety of fish sauce.
NASA Astrophysics Data System (ADS)
Weiss, Jake; Newberg, Heidi Jo; Arsenault, Matthew; Bechtel, Torrin; Desell, Travis; Newby, Matthew; Thompson, Jeffery M.
2016-01-01
Statistical photometric parallax is a method for using the distribution of absolute magnitudes of stellar tracers to statistically recover the underlying density distribution of these tracers. In previous work, statistical photometric parallax was used to trace the Sagittarius Dwarf tidal stream, the so-called bifurcated piece of the Sagittarius stream, and the Virgo Overdensity through the Milky Way. We use an improved knowledge of this distribution in a new algorithm that accounts for the changes in the stellar population of color-selected stars near the photometric limit of the Sloan Digital Sky Survey (SDSS). Although we select bluer main sequence turnoff (MSTO) stars as tracers, large color errors near the survey limit cause many stars to be scattered out of our selection box and many fainter, redder stars to be scattered into our selection box. We show that we are able to recover parameters for analogues of these streams in simulated data using a maximum likelihood optimization on MilkyWay@home. We also present the preliminary results of fitting the density distribution of major Milky Way tidal streams in SDSS data. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.
Autopilot for frequency-modulation atomic force microscopy.
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri
2015-10-01
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
Autopilot for frequency-modulation atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri, E-mail: phsivan@tx.technion.ac.il
2015-10-15
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
Multi-objective optimization of energy systems (Optimisation multi-objectif des systèmes énergétiques)
NASA Astrophysics Data System (ADS)
Dipama, Jean
The increasing demand for energy and environmental concerns over greenhouse gas emissions are leading more and more private and public utilities to turn to nuclear energy as an alternative for the future. Nuclear power plants are therefore expected to undergo a large expansion in the coming years, and improved technologies will be put in place to support their development. This thesis considers the optimization of the thermodynamic cycle of the secondary loop of the Gentilly-2 nuclear power plant in terms of output power and thermal efficiency. Investigations are carried out to determine the optimal operating conditions of steam power cycles through the judicious combination of steam extractions at the different turbine stages. Whether for superheating or regeneration, we are confronted in every case with an optimization problem involving two conflicting objectives, since increasing the efficiency implies decreasing the mechanical work and vice versa. Solving this kind of problem does not lead to a unique solution, but to a set of solutions that are trade-offs between the conflicting objectives. To find all of these solutions, called Pareto-optimal solutions, an appropriate optimization algorithm is required. Before starting the optimization of the secondary loop, we developed a thermodynamic model of the loop that includes models of the main thermal components (e.g., turbine, moisture separator-superheater, condenser, feedwater heater, and deaerator). This model is used to calculate the thermodynamic state of the steam and water at the different points of the installation. The thermodynamic model was developed in Matlab and validated by comparing its predictions with operating data provided by the plant's engineers. The optimizer, developed in VBA (Visual Basic for Applications), uses an optimization algorithm based on the principle of genetic algorithms, a stochastic optimization method that is very robust and widely used to solve problems that are usually difficult to handle by traditional methods. Genetic algorithms (GAs) were used in previous research and proved efficient in optimizing heat exchanger networks (HENs) (Dipama et al., 2008), where HENs were synthesized to recover the maximum heat in an industrial process; that optimization problem consisted of a single objective, namely the maximization of energy recovery. The optimization algorithm developed in this thesis extends the ability of GAs by taking several objectives into account simultaneously. This algorithm introduces an innovation in the search for optimal solutions: a technique that partitions the solution space into parallel grids called "watching corridors". These corridors specify areas (the observation corridors) in which the most promising feasible solutions are found and used to guide the search towards optimal solutions. A measure of the progress of the search is incorporated into the optimization algorithm to make it self-adaptive, through the use of appropriate genetic operators at each stage of the optimization process. The proposed method allows fast convergence and ensures a diversity of solutions. Moreover, it gives the algorithm the ability to overcome difficulties associated with optimization problems having complex Pareto front landscapes (e.g., discontinuity, disjunction, etc.).
The multi-objective optimization algorithm has been first validated using numerical test problems found in the literature as well as energy systems optimization problems. Finally, the proposed optimization algorithm has been applied for the optimization of the secondary loop of Gentilly-2 nuclear power plant, and a set of solutions have been found which permit to make the power plant operate in optimal conditions. (Abstract shortened by UMI.)
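As an illustration of the Pareto-dominance test at the heart of such multi-objective genetic algorithms, the following minimal Python sketch filters a candidate population down to its non-dominated front. The data and function names are hypothetical, and the thesis's "watching corridors" partitioning is not reproduced here.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points.

    `objectives` is an (n_solutions, n_objectives) array where every
    objective is to be maximized (e.g. output power and efficiency).
    """
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # j dominates i if j is >= i in every objective and > i in at least one
        dominated = np.all(objectives >= objectives[i], axis=1) & \
                    np.any(objectives > objectives[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# Toy trade-off: power vs. efficiency for random candidate cycles
rng = np.random.default_rng(0)
cands = rng.random((200, 2))
front = pareto_front(cands)
print(f"{front.size} Pareto-optimal candidates out of {cands.shape[0]}")
```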
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that are then integrated into an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize computational time.
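The modified Hausdorff distance underlying the fitness function has a standard form; a minimal Python sketch, assuming simple 2D point sets rather than full crater contours, is:

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between two point sets a (n, 2) and b (m, 2).

    Uses the mean (rather than the max) of nearest-neighbour distances in
    each direction, which makes it less sensitive to outlier points.
    """
    d = cdist(a, b)                      # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()          # mean nearest distance a -> b
    d_ba = d.min(axis=0).mean()          # mean nearest distance b -> a
    return max(d_ab, d_ba)
```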
Asymptotic Cramer-Rao bounds for Morlet wavelet filter bank transforms of FM signals
NASA Astrophysics Data System (ADS)
Scheper, Richard
2002-03-01
Wavelet filter banks are potentially useful tools for analyzing and extracting information from frequency modulated (FM) signals in noise. Chief among the advantages of such filter banks is the tendency of wavelet transforms to concentrate signal energy while simultaneously dispersing noise energy over the time-frequency plane, thus raising the effective signal to noise ratio of filtered signals. Over the past decade, much effort has gone into devising new algorithms to extract the relevant information from transformed signals while identifying and discarding the transformed noise. Therefore, estimates of the ultimate performance bounds on such algorithms would serve as valuable benchmarks in the process of choosing optimal algorithms for given signal classes. Discussed here is the specific case of FM signals analyzed by Morlet wavelet filter banks. By making use of the stationary phase approximation of the Morlet transform, and assuming that the measured signals are well resolved digitally, the asymptotic form of the Fisher Information Matrix is derived. From this, Cramer-Rao bounds are analytically derived for simple cases.
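Once a Fisher information matrix is in hand, the Cramer-Rao bounds follow by matrix inversion. A minimal numerical sketch is shown below; the 2x2 Fisher matrix here is hypothetical and not taken from the paper.

```python
import numpy as np

def cramer_rao_bounds(fisher):
    """Lower bounds on parameter standard deviations from a Fisher matrix.

    For an unbiased estimator, cov(theta_hat) >= inv(F); the diagonal of
    inv(F) bounds the variance of each parameter individually.
    """
    cov_lower = np.linalg.inv(fisher)
    return np.sqrt(np.diag(cov_lower))

# Hypothetical Fisher matrix for two FM signal parameters
F = np.array([[4.0e4, 1.2e3],
              [1.2e3, 9.0e2]])
print(cramer_rao_bounds(F))   # per-parameter RMS error floors
```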
HARPS-N OBSERVES THE SUN AS A STAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumusque, Xavier; Glenday, Alex; Phillips, David F.
Radial velocity (RV) perturbations induced by stellar surface inhomogeneities including spots, plages and granules currently limit the detection of Earth-twins using Doppler spectroscopy. Such stellar noise is poorly understood for stars other than the Sun because their surface is unresolved. In particular, the effects of stellar surface inhomogeneities on observed stellar radial velocities are extremely difficult to characterize, and thus developing optimal correction techniques to extract true stellar radial velocities is extremely challenging. In this paper, we present preliminary results of a solar telescope built to feed full-disk sunlight into the HARPS-N spectrograph, which is in turn calibrated with an astro-comb. This setup enables long-term observation of the Sun as a star with state-of-the-art sensitivity to RV changes. Over seven days of observing in 2014, we show an average 50 cm s⁻¹ RV rms over a few hours of observation. After correcting observed radial velocities for spot and plage perturbations using full-disk photometry of the Sun, we lower the weekly RV rms by a factor of two, to 60 cm s⁻¹. The solar telescope is now entering routine operation, and will observe the Sun every clear day for several hours. We will use these radial velocities combined with data from solar satellites to improve our understanding of stellar noise and develop optimal correction methods. If successful, these new methods should enable the detection of Venus over the next two to three years, thus demonstrating the possibility of detecting Earth-twins around other solar-like stars using the RV technique.
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy, complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a very interesting theoretical maximal complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted objects. Deformable object contours are modeled by cubic splines, and an affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy, which avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor coupled with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
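For reference, a plain linear unsharp-masking step can be written in a few lines; the paper's nonlinear variant would modify how the high-pass residual is added back, so treat this only as a baseline sketch with assumed parameter values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.5):
    """Sharpen an image by adding back a scaled high-pass residual."""
    img = img.astype(float)
    blurred = gaussian_filter(img, sigma)
    sharp = img + amount * (img - blurred)      # boost fine detail
    return np.clip(sharp, img.min(), img.max()) # stay in the original range
```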
NASA Astrophysics Data System (ADS)
Khajeh, M.; Pourkarami, A.; Arefnejad, E.; Bohlooli, M.; Khatibi, A.; Ghaffari-Moghaddam, M.; Zareian-Jahromi, S.
2017-09-01
Chitosan-zinc oxide nanoparticles (CZPs) were developed for solid-phase extraction. A combined artificial neural network-ant colony optimization (ANN-ACO) approach was used for the simultaneous preconcentration and determination of lead (Pb2+) ions in water samples prior to graphite furnace atomic absorption spectrometry (GF AAS). The solution pH, mass of the CZP adsorbent, amount of 1-(2-pyridylazo)-2-naphthol (PAN) used as a complexing agent, eluent volume, eluent concentration, and flow rates of sample and eluent were used as input parameters of the ANN model, and the percentage of extracted Pb2+ ions was the output variable of the model. A multilayer perceptron network with a back-propagation learning algorithm was used to fit the experimental data, and the optimum conditions were obtained with the ACO. Under the optimized conditions, the limit of detection for Pb2+ ions was found to be 0.078 μg/L. The procedure was also successfully used to determine the amounts of Pb2+ ions in various natural water samples.
Khajeh, Mostafa; Sarafraz-Yazdi, Ali; Natavan, Zahra Bameri
2016-03-01
The aim of this research was to develop a low-cost, environmentally friendly adsorbent from an abundant source to remove methylene blue (MB) from water samples. Sawdust solid-phase extraction coupled with high-performance liquid chromatography was used for the extraction and determination of MB. In this study, an experimental-data-based artificial neural network model is constructed to describe the performance of the sawdust solid-phase extraction method under various operating conditions. The pH, time, amount of sawdust, and temperature were the input variables, while the percentage of extraction of MB was the output. The optimum operating condition was then determined by a genetic algorithm: pH 11.5, extraction time 22.0 min, 0.3 g of adsorbent, and 26.0°C. Under these optimum conditions, the detection limit and relative standard deviation were 0.067 μg L(-1) and <2.4%, respectively. The Langmuir and Freundlich adsorption models were applied to describe the isotherm constants for the removal and determination of MB from water samples. © The Author(s) 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.
We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira
2018-02-05
A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE), with UV-Vis spectrophotometry as the detection technique, was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as the template molecule. The factors affecting the preparation and extraction ability of the MIP, such as the amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, washing shaking rate, washing time, and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. Further optimization was performed using central composite design (CCD), an artificial neural network (ANN), and a genetic algorithm (GA). Under optimal conditions, the calibration curve was linear over the concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56×10⁻⁹ M, and the repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum, and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Design optimization of highly asymmetrical layouts by 2D contour metrology
NASA Astrophysics Data System (ADS)
Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.
2018-03-01
As design pitch shrinks to the resolution limit of up-to-date optical lithography technology, the Critical Dimension (CD) variation tolerance has decreased dramatically to ensure device functionality. One of the critical challenges associated with the narrower CD tolerance over the whole chip area is proximity-effect control in asymmetrical layout environments. To fulfill the tight CD control of complex features, Critical Dimension Scanning Electron Microscope (CD-SEM) based measurement results for qualifying the process window and establishing the Optical Proximity Correction (OPC) model have become insufficient; 2D contour extraction techniques [1-5] have therefore become an increasingly important approach for complementing the shortcomings of the traditional CD measurement algorithm. To alleviate the long cycle times and high cost penalties of product verification, manufacturing requirements are best handled at the design stage to improve the quality and yield of ICs. In this work, an in-house 2D contour extraction platform was established for layout design optimization of a 39nm half-pitch Self-Aligned Double Patterning (SADP) process layer. Combined with the adoption of a Process Variation Band Index (PVBI), the contour extraction platform speeds up layout optimization compared with traditional methods. The platform's ability to identify and handle lithography hotspots in complex layout environments allows process-window-aware layout optimization that meets the manufacturing requirements.
Narayanan, Shrikanth
2009-01-01
We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
Star tracker operation in a high density proton field
NASA Technical Reports Server (NTRS)
Miklus, Kenneth J.; Kissh, Frank; Flynn, David J.
1993-01-01
Algorithms that reject transient signals due to proton effects on charge coupled device (CCD) sensors have been implemented in the HDOS ASTRA-1 Star Trackers to be flown on the TOPEX mission scheduled for launch in July 1992. A unique technique for simulating a proton-rich environment to test trackers is described, as well as the test results obtained. Solar flares or an orbit that passes through the South Atlantic Anomaly can subject the vehicle to very high proton flux levels. There are three ways in which spurious proton-generated signals can impact tracker performance: the many false signals can prevent or extend the time to acquire a star; a proton-generated signal can compromise the accuracy of the star's reported magnitude and position; and the tracked star can be lost, requiring reacquisition. Tests simulating a proton-rich environment were performed on two ASTRA-1 Star Trackers utilizing these new algorithms. There were no false acquisitions, no lost stars, and a significant reduction in reported position errors due to these improvements.
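One generic way to reject single-frame proton transients is to require that a detection persist across consecutive frames; the sketch below illustrates that persistence idea only, not the proprietary ASTRA-1 algorithm, and its threshold and window parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def persistent_detections(frames, threshold, min_frames=3, tol=1):
    """Keep pixels whose detections persist for at least `min_frames`
    consecutive frames. Single-frame proton hits fail the test, while a
    real star reappears frame after frame; `tol` dilates each frame's
    detection mask by a pixel or so to allow small centroid motion."""
    masks = frames > threshold
    if tol:
        masks = np.array([binary_dilation(m, iterations=tol) for m in masks])
    run = np.zeros(masks.shape[1:], dtype=int)     # current consecutive run
    best = np.zeros_like(run)                      # longest run seen so far
    for m in masks:
        run = np.where(m, run + 1, 0)
        best = np.maximum(best, run)
    return best >= min_frames
```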
NASA Astrophysics Data System (ADS)
Basoglu, Burak; Halicioglu, Kerem; Albayrak, Muge; Ulug, Rasit; Tevfik Ozludemir, M.; Deniz, Rasim
2017-04-01
In the last decade, the importance of high-precision geoid determination at the local and national level has been emphasized by the Turkish National Geodesy Commission, which has also put the modernization of Turkey's national height system on the agenda. Several related projects have been realized in recent years. In Istanbul, a GNSS/levelling geoid was defined in 2005 for the metropolitan area of the city with an accuracy of ±3.5 cm. In order to achieve better accuracy in this area, the project "Local Geoid Determination with Integration of GNSS/Levelling and Astro-Geodetic Data" has been conducted at Istanbul Technical University and Bogazici University KOERI since January 2016, funded by The Scientific and Technological Research Council of Turkey. Within the scope of the project, the Digital Zenith Camera System is being modernized in terms of hardware components and software development. The main subjects are the star catalogues and the centroiding algorithm used to identify the stars in the zenithal star field. During the test observations of the Digital Zenith Camera System performed between 2013 and 2016, final results were calculated using the PSF method for star centroiding and the second USNO CCD Astrograph Catalogue (UCAC2) for the reference star positions. This study aims to investigate the position accuracy of the star images by comparing different centroiding algorithms and available star catalogues used in astro-geodetic observations conducted with the digital zenith camera system.
Overlay improvements using a real time machine learning algorithm
NASA Astrophysics Data System (ADS)
Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank
2014-04-01
While semiconductor manufacturing is moving towards the 14nm node using immersion lithography, overlay requirements are tightened to below 5nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data, and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance in time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.
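As a toy stand-in for the (unspecified) machine learning method, a linear model mapping scanner sensor channels to measured overlay illustrates the train-then-predict-inline flow; all names and data below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training set: rows are exposed wafers, columns are scanner
# context/sensor channels; targets are measured overlay residuals (nm).
rng = np.random.default_rng(1)
X_sensors = rng.normal(size=(500, 24))
y_overlay = X_sensors @ rng.normal(size=24) * 0.1 + rng.normal(0, 0.3, 500)

model = Ridge(alpha=1.0).fit(X_sensors, y_overlay)

# Inline use: predict the overlay correction for the next wafer from its
# sensor readings, before metrology results become available.
next_wafer = rng.normal(size=(1, 24))
print(f"predicted overlay residual: {model.predict(next_wafer)[0]:.2f} nm")
```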
The MiMeS survey of Magnetism in Massive Stars: magnetic analysis of the O-type stars
NASA Astrophysics Data System (ADS)
Grunhut, J. H.; Wade, G. A.; Neiner, C.; Oksala, M. E.; Petit, V.; Alecian, E.; Bohlender, D. A.; Bouret, J.-C.; Henrichs, H. F.; Hussain, G. A. J.; Kochukhov, O.; MiMeS Collaboration
2017-02-01
We present the analysis performed on spectropolarimetric data of 97 O-type targets included in the framework of the Magnetism in Massive Stars (MiMeS) Survey. Mean least-squares deconvolved Stokes I and V line profiles were extracted for each observation, from which we measured the radial velocity, rotational and non-rotational broadening velocities, and longitudinal magnetic field Bℓ. The investigation of the Stokes I profiles led to the discovery of two new multiline spectroscopic systems (HD 46106, HD 204827) and confirmed the presence of a suspected companion in HD 37041. We present a modified strategy of the least-squares deconvolution technique aimed at optimizing the detection of magnetic signatures while minimizing the detection of spurious signatures in Stokes V. Using this analysis, we confirm the detection of a magnetic field in six targets previously reported as magnetic by the MiMeS collaboration (HD 108, HD 47129A2, HD 57682, HD 148937, CPD-28 2561, and NGC 1624-2), as well as report the presence of signal in Stokes V in three new magnetic candidates (HD 36486, HD 162978, and HD 199579). Overall, we find a magnetic incidence rate of 7 ± 3 per cent, for 108 individual O stars (including all O-type components part of multiline systems), with a median uncertainty of the Bℓ measurements of about 50 G. An inspection of the data reveals no obvious biases affecting the incidence rate or the preference for detecting magnetic signatures in the magnetic stars. Similar to A- and B-type stars, we find no link between the stars' physical properties (e.g. Teff, mass, and age) and the presence of a magnetic field. However, the Of?p stars represent a distinct class of magnetic O-type stars.
Laboratory Verification of Occulter Contrast Performance and Formation Flight
NASA Astrophysics Data System (ADS)
Sirbu, Dan
2014-01-01
Direct imaging of an exo-Earth is a difficult technical challenge. First, the intensity ratio between the parent star and its dim, rocky planetary companion is expected to be about ten billion to one. Additionally, for a planetary companion in the habitable zone the angular separation from the star is very small, so that only nearby stars are feasible targets. An external occulter is a spacecraft flown in formation with the observing space telescope that blocks starlight before it reaches the entrance pupil. Its shape must be specially designed to control diffraction and to be tolerant of errors such as misalignment, manufacturing defects, and deformations. In this dissertation, we present laboratory results pertaining to the optical verification of the contrast performance of a scaled occulter and the implementation of an algorithm for aligning the telescope within the occulter's shadow. The experimental testbed is scaled from space dimensions to the laboratory by maintaining constant Fresnel numbers while preserving an identical diffraction integral. We present monochromatic results in the image plane showing contrast better than 10 orders of magnitude, consistent with the level required for imaging an exo-Earth, obtained using an optimized occulter shape; we compare these results to a baseline case using a circular occulter and to theoretical predictions. Additionally, we address the principal technical challenge of the formation flight problem by demonstrating an alignment algorithm based on out-of-band leaked light. Such leaked light can be used as a map to estimate the location of the telescope in the shadow and to perform fine alignment during science observations.
An automatic system to detect and extract texts in medical images for de-identification
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael
2010-03-01
Recently, there is an increasing need to share medical images for research purposes. To respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for removing text from medical images. Many papers have been written on algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since a de-identification system is designed for end users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that the text has a remarkable contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system was implemented, showing that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computation optimization.
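The region-variance detection step can be sketched with separable box filters; the window size and threshold factor below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_variance_mask(img, win=15, k=2.0):
    """Flag candidate text regions as pixels whose local variance is well
    above the image-wide average; burned-in text tends to contrast
    strongly against its local background."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    local_var = mean_sq - mean ** 2        # var = E[x^2] - (E[x])^2
    return local_var > k * local_var.mean()
```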
Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery
NASA Astrophysics Data System (ADS)
Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin
2013-06-01
In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary pattern (LBP) and histogram of oriented gradients (HOG) as these are good at detecting texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters involved in the system, such as the combination of features used, parameters on the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM using parameters obtained through our ECA technique with those previously selected by hand through several iterations of "guess and check".
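A stripped-down version of evolving SVM hyperparameters might look like the sketch below, which tunes only C and gamma, whereas the paper's ECA also tunes the feature combination and the Canny, HOG, and LBP parameters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(genome):
    C, gamma = np.exp(genome)                # genome stores log-parameters
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.normal(0.0, 2.0, size=(20, 2))     # initial random population
for gen in range(15):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]  # truncation selection
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.3, (10, 2))
    pop = np.vstack([parents, children])     # elitism + mutation

best = pop[np.argmax([fitness(g) for g in pop])]
print("best C, gamma:", np.exp(best))
```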
Error field optimization in DIII-D using extremum seeking control
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...
2016-06-03
A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well suited for multi-coil, multi-harmonic error field optimizations in disruption-sensitive devices, as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
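The core extremum-seeking loop (dither, demodulate, integrate) can be sketched on a toy objective; the gains, frequencies, and the quadratic rotation model below are illustrative, not DIII-D values.

```python
import numpy as np

# Toy objective: measured rotation peaks at the unknown optimal coil
# current u* = 2.0, with a little measurement noise.
def rotation(u):
    return -(u - 2.0) ** 2 + 0.01 * np.random.randn()

u, gain, a, w, dt = 0.0, 4.0, 0.2, 2 * np.pi * 0.5, 0.01
for k in range(20000):
    t = k * dt
    dither = a * np.sin(w * t)         # slowly rotating perturbation
    y = rotation(u + dither)           # perturbed rotation measurement
    grad_est = y * np.sin(w * t)       # demodulation ~ local gradient
    u += gain * grad_est * dt          # integrate toward the peak
print(f"converged coil current: {u:.2f}")   # approaches 2.0
```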
NASA Astrophysics Data System (ADS)
Wise, John
In the near future, next-generation telescopes, covering most of the electromagnetic spectrum, will provide a view into the very earliest stages of galaxy formation. To accurately interpret these future observations, accurate and high-resolution simulations of the first stars and galaxies are vital. This proposal is centered on the formation of the first galaxies in the Universe and their observational signatures in preparation for these future observatories. This proposal has two overall goals: 1. To simulate the formation and evolution of a statistically significant sample of galaxies during the first billion years of the Universe, including all relevant astrophysics while resolving individual molecular clouds, in various cosmological environments. These simulations will utilize a sophisticated physical model of star and black hole formation and feedback, including radiation transport and magnetic fields, which will lead to the most realistic and resolved predictions for the early universe; 2. To predict the observational features of the first galaxies throughout the electromagnetic spectrum, allowing for optimal extraction of galaxy and dark matter halo properties from their photometry, imaging, and spectra; The proposed research plan addresses a timely and relevant issue to theoretically prepare for the interpretation of future observations of the first galaxies in the Universe. A suite of adaptive mesh refinement simulations will be used to follow the formation and evolution of thousands of galaxies observable with the James Webb Space Telescope (JWST) that will be launched during the second year of this project. The simulations will have also tracked the formation and death of over 100,000 massive metal-free stars. Currently, there is a gap of two orders of magnitude in stellar mass between the smallest observed z > 6 galaxy and the largest simulated galaxy from "first principles", capturing its entire star formation history. This project will eliminate this gap between simulations and observations of the first galaxies, providing predictions for next-generation observations coming online throughout the next decade. The proposed activities present the graduate students involved in the project with opportunities to gain expertise in numerical algorithms, high performance computing, and software engineering. With this experience, the students will be in a powerful position to face the challenging job market. The computational tools produced by this project will be made freely available and incorporated into their respective frameworks to preserve their sustainability.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
NASA Astrophysics Data System (ADS)
Johnston, K. B.; Peter, A. M.
2017-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing, instead the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial; specifically to the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
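A rough reading of the slotting-plus-Markov idea can be sketched as follows; the slot width, symbol count, and use of quantile bins are assumptions for illustration, not the paper's exact SSMM construction.

```python
import numpy as np

def markov_signature(times, mags, slot=1.0, n_symbols=4):
    """Reduce an irregularly sampled light curve to a Markov transition
    matrix: slot observations onto a regular time grid, quantize slotted
    magnitudes into symbols, then count symbol-to-symbol transitions."""
    bins = np.arange(times.min(), times.max() + slot, slot)
    idx = np.digitize(times, bins) - 1
    slotted = np.array([mags[idx == i].mean()
                        for i in range(len(bins) - 1)
                        if np.any(idx == i)])          # skip empty slots
    edges = np.quantile(slotted, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(slotted, edges)              # 0 .. n_symbols-1
    T = np.zeros((n_symbols, n_symbols))
    for s0, s1 in zip(symbols[:-1], symbols[1:]):
        T[s0, s1] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)   # row-normalize
    return T.ravel()                                   # feature vector
```

The flattened transition matrix then serves as the feature vector fed to a supervised classifier.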
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
NASA Astrophysics Data System (ADS)
Johnston, Kyle B.; Peter, Adrian M.
2016-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing, instead the profession as a whole has begun to adopt more advanced computational means. Our research focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial; specifically to the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
STAR-GALAXY CLASSIFICATION IN MULTI-BAND OPTICAL IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fadely, Ross; Willman, Beth; Hogg, David W.
2012-11-20
Ground-based optical surveys such as PanSTARRS, DES, and LSST will produce large catalogs to limiting magnitudes of r ≳ 24. Star-galaxy separation poses a major challenge to such surveys because galaxies, even very compact galaxies, outnumber halo stars at these depths. We investigate photometric classification techniques on stars and galaxies with intrinsic FWHM <0.2 arcsec. We consider unsupervised spectral energy distribution template fitting and supervised, data-driven support vector machines (SVMs). For template fitting, we use a maximum likelihood (ML) method and a new hierarchical Bayesian (HB) method, which learns the prior distribution of template probabilities from the data. SVM requires training data to classify unknown sources; ML and HB do not. We consider (1) a best-case scenario (SVM_best) where the training data are (unrealistically) a random sampling of the data in both signal-to-noise and demographics and (2) a more realistic scenario where training is done on higher signal-to-noise data (SVM_real) at brighter apparent magnitudes. Testing with COSMOS ugriz data, we find that HB outperforms ML, delivering ~80% completeness, with purity of ~60%-90% for both stars and galaxies. We find that no algorithm delivers perfect performance and that studies of metal-poor main-sequence turnoff stars may be challenged by poor star-galaxy separation. Using the Receiver Operating Characteristic curve, we find a best-to-worst ranking of SVM_best, HB, ML, and SVM_real. We conclude, therefore, that a well-trained SVM will outperform template-fitting methods. However, a normally trained SVM performs worse. Thus, HB template fitting may prove to be the optimal classification method in future surveys.
Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph
NASA Astrophysics Data System (ADS)
Savransky, D.; Groff, T. D.; Kasdin, N. J.
2010-10-01
We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source. We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.
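The matched-filtering step itself reduces to a cross-correlation with the planet PSF template; below is a minimal sketch under a white-noise assumption (the testbed's speckle noise is more structured, so this is illustrative only).

```python
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_snr(image, psf, noise_sigma):
    """Per-pixel matched-filter S/N map for a known planet PSF in
    (approximately) white noise: correlate with the template and
    normalize by the template norm times the noise level."""
    template = psf / np.sqrt((psf ** 2).sum())          # unit-norm template
    # correlation = convolution with the flipped kernel
    score = fftconvolve(image, template[::-1, ::-1], mode="same")
    return score / noise_sigma

# A detection is declared where the S/N map exceeds a threshold (e.g. 5);
# sweeping the threshold against injected planets traces out the trade
# between detections and false positives.
```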
Microbial genotype-phenotype mapping by class association rule mining.
Tamura, Makio; D'haeseleer, Patrik
2008-07-01
Microbial phenotypes are typically due to the concerted action of multiple gene functions, yet the presence of each gene may have only a weak correlation with the observed phenotype. Hence, it may be more appropriate to examine co-occurrence between sets of genes and a phenotype (multiple-to-one) instead of pairwise relations between a single gene and the phenotype. Here, we propose an efficient class association rule mining algorithm, netCAR, in order to extract sets of COGs (clusters of orthologous groups of proteins) associated with a phenotype from COG phylogenetic profiles and a phenotype profile. netCAR takes into account the phylogenetic co-occurrence graph between COGs to restrict hypothesis space, and uses mutual information to evaluate the biconditional relation. We examined the mining capability of pairwise and multiple-to-one association by using netCAR to extract COGs relevant to six microbial phenotypes (aerobic, anaerobic, facultative, endospore, motility and Gram negative) from 11,969 unique COG profiles across 155 prokaryotic organisms. With the same level of false discovery rate, multiple-to-one association can extract about 10 times more relevant COGs than one-to-one association. We also reveal various topologies of association networks among COGs (modules) from extracted multiple-to-one correlation rules relevant with the six phenotypes; including a well-connected network for motility, a star-shaped network for aerobic and intermediate topologies for the other phenotypes. netCAR outperforms a standard CAR mining algorithm, CARapriori, while requiring several orders of magnitude less computational time for extracting 3-COG sets. Source code of the Java implementation is available as Supplementary Material at the Bioinformatics online website, or upon request to the author. Supplementary data are available at Bioinformatics online.
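Scoring one candidate multiple-to-one rule with mutual information can be sketched directly; the helper below is hypothetical and omits netCAR's pruning via the phylogenetic co-occurrence graph.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def score_cog_set(cog_profiles, phenotype, cog_set):
    """Score a candidate COG set against a phenotype profile.

    `cog_profiles` is an (n_cogs, n_organisms) 0/1 presence matrix and
    `phenotype` a 0/1 vector over organisms; the rule fires for organisms
    carrying every COG in `cog_set`, and mutual information measures how
    well that co-occurrence pattern tracks the phenotype.
    """
    rule = cog_profiles[list(cog_set)].all(axis=0).astype(int)
    return mutual_info_score(rule, phenotype)
```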
Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest
Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan
2018-01-01
Mechanical faults of high-voltage circuit breakers (HVCBs) always happen over long-term operation, so extracting the fault features and identifying the fault type have become a key issue for ensuring the security and reliability of power supply. Based on wavelet packet decomposition technology and random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description of Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variable and optimize the feature space. Finally, the approach was verified based on actual HVCB vibration signals by considering six typical fault classes. The comparative experiment results show that the classification accuracy of the proposed method with the origin feature space reached 93.33% and reached up to 95.56% with optimized input feature vector of classifier. This indicates that feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
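The augmentation strategy can be sketched with networkx's chordality test; this greedy serial version conveys the idea but reproduces neither the paper's complexity analysis nor its parallel structure.

```python
import networkx as nx

def maximal_chordal_subgraph(G):
    """Start from a spanning forest (trees are chordal), then repeatedly
    try to add the remaining edges of G, keeping only those that preserve
    chordality, until a full pass adds nothing more (maximality)."""
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(nx.minimum_spanning_edges(G, data=False))
    changed = True
    while changed:
        changed = False
        for u, v in G.edges():
            if H.has_edge(u, v):
                continue
            H.add_edge(u, v)
            if nx.is_chordal(H):
                changed = True           # keep the edge
            else:
                H.remove_edge(u, v)      # it closed a long chordless cycle

    return H

# Example: a 5-cycle plus one chord; the result is chordal and maximal.
G = nx.cycle_graph(5)
G.add_edge(0, 2)
print(sorted(maximal_chordal_subgraph(G).edges()))
```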
Kesner, Adam Leon; Kuntner, Claudia
2010-10-01
Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-based algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of the respiratory signal in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior of the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, respiratory signals were extracted from a collection of 22 clinical FDG-PET scans using this method and compared to signals derived from several other software-based methods as well as from a hardware system. The method required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm; the mean/median/range of correlations for the set of scans with the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real time, while its accuracy surpassed the most accurate of the previously presented algorithms. PET data inherently contain information about patient motion, information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data, potentially in real time and in a fully automated manner. This signal correlates well with the hardware-based signal for a large percentage of scans and avoids the effort and complications associated with hardware. The proposed method can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
Uppal, Karan; Soltow, Quinlyn A; Strobel, Frederick H; Pittard, W Stephen; Gernert, Kim M; Yu, Tianwei; Jones, Dean P
2013-01-16
Detection of low abundance metabolites is important for de novo mapping of metabolic pathways related to diet, microbiome or environmental exposures. Multiple algorithms are available to extract m/z features from liquid chromatography-mass spectral data in a conservative manner, which tends to preclude detection of low abundance chemicals and chemicals found in small subsets of samples. The present study provides software to enhance such algorithms for feature detection, quality assessment, and annotation. xMSanalyzer is a set of utilities for automated processing of metabolomics data. The utilities can be classified into four main modules to: 1) improve feature detection for replicate analyses by systematic re-extraction with multiple parameter settings and data merger to optimize the balance between sensitivity and reliability, 2) evaluate sample quality and feature consistency, 3) detect feature overlap between datasets, and 4) characterize high-resolution m/z matches to small molecule metabolites and biological pathways using multiple chemical databases. The package was tested with plasma samples and shown to more than double the number of features extracted while improving the quantitative reliability of detection. MS/MS analysis of a random subset of peaks that were exclusively detected using xMSanalyzer confirmed that the optimization scheme improves detection of real metabolites. xMSanalyzer is a package of utilities for data extraction, quality control assessment, detection of overlapping and unique metabolites in multiple datasets, and batch annotation of metabolites. The program was designed to integrate with existing packages such as apLCMS and XCMS, but the framework can also be used to enhance data extraction for other LC/MS data software.
Research of centroiding algorithms for extended and elongated spot of sodium laser guide star
NASA Astrophysics Data System (ADS)
Shao, Yayun; Zhang, Yudong; Wei, Kai
2016-10-01
Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. However, the spot array obtained by a Shack-Hartmann wavefront sensor (WFS) becomes extended and elongated because of the thickness and finite size of the sodium LGS, which affects the accuracy of the wavefront reconstruction algorithm. In this paper, we compare three centroiding algorithms, the center of gravity (CoG), weighted CoG (WCoG), and intensity weighted centroid (IWC), together with their accuracies for various extended and elongated spots. In addition, we compare the reconstructed image data from these three algorithms with theoretical results and show that WCoG and IWC are the best-performing of the algorithms considered for extended and elongated spots.
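The three centroiding estimators differ only in the weights applied to the pixel grid; a compact sketch follows, in which the Gaussian window width is an assumed value rather than one from the paper.

```python
import numpy as np

def centroids(spot):
    """CoG, WCoG, and IWC estimates for a single subaperture spot.

    WCoG multiplies the image by a Gaussian window centred on an initial
    spot estimate; IWC uses the squared intensity as the weight, which
    suppresses the elongated low-intensity tail of the spot.
    """
    y, x = np.indices(spot.shape)

    def cog(w):
        s = w.sum()
        return (x * w).sum() / s, (y * w).sum() / s

    cx, cy = cog(spot)                               # plain CoG
    win = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * 3.0**2))
    return {"CoG": (cx, cy),
            "WCoG": cog(spot * win),                 # Gaussian-weighted
            "IWC": cog(spot ** 2)}                   # intensity-weighted
```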
Reprocessing of Archival Direct Imaging Data of Herbig Ae/Be Stars
NASA Astrophysics Data System (ADS)
Safsten, Emily; Stephens, Denise C.
2017-01-01
Herbig Ae/Be (HAeBe) stars are intermediate mass (2-10 solar mass) pre-main sequence stars with circumstellar disks. They are the higher mass analogs of the better-known T Tauri stars. Observing planets within these young disks would greatly aid in understanding planet formation processes and timescales, particularly around massive stars. So far, only one planet, HD 100546b, has been confirmed to orbit a HAeBe star. With over 250 HAeBe stars known, and several observed to have disks with structures thought to be related to planet formation, it seems likely that there are as yet undiscovered planetary companions within the circumstellar disks of some of these young stars.Direct detection of a low-luminosity companion near a star requires high contrast imaging, often with the use of a coronagraph, and the subtraction of the central star's point spread function (PSF). Several processing algorithms have been developed in recent years to improve PSF subtraction and enhance the signal-to-noise of sources close to the central star. However, many HAeBe stars were observed via direct imaging before these algorithms came out. We present here current work with the PSF subtraction program PynPoint, which employs a method of principal component analysis, to reprocess archival images of HAeBe stars to increase the likelihood of detecting a planet in their disks.
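The principal-component PSF subtraction that PynPoint performs can be sketched in a few lines of numpy; this is a bare-bones version, without PynPoint's masking, frame stacking, or contrast-calibration machinery.

```python
import numpy as np

def pca_psf_subtract(science, references, n_modes=5):
    """Project a science frame onto the first `n_modes` principal
    components of a reference PSF library and subtract the projection,
    leaving residuals in which a faint companion can stand out."""
    shape = science.shape
    R = references.reshape(len(references), -1)
    R = R - R.mean(axis=1, keepdims=True)            # per-frame mean removal
    s = science.ravel() - science.mean()
    _, _, Vt = np.linalg.svd(R, full_matrices=False) # library eigen-images
    basis = Vt[:n_modes]                             # orthonormal PC rows
    model = basis.T @ (basis @ s)                    # projection onto PSF subspace
    return (s - model).reshape(shape)
```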
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problems of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of the two images. Second, the Normalized Cross-Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors introduced during approximate matching. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to refine the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the proposed image registration algorithm improves the accuracy of image matching while ensuring the real-time performance of the algorithm.
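The Harris/NCC/RANSAC backbone of such a pipeline can be assembled from standard OpenCV calls. In the sketch below a simple NCC score threshold stands in for the paper's K-means parallax filtering, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def register(img1, img2, patch=15, ncc_min=0.9):
    """Harris corners + NCC patch matching + RANSAC homography."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(g1, 500, 0.01, 10,
                                      useHarrisDetector=True)
    h = patch // 2
    src, dst = [], []
    for (x, y) in corners.reshape(-1, 2).astype(int):
        if not (h <= x < g1.shape[1] - h and h <= y < g1.shape[0] - h):
            continue
        tpl = g1[y - h:y + h + 1, x - h:x + h + 1]
        # NCC of the patch against the whole second image (global
        # search kept for simplicity); take the best-scoring location
        res = cv2.matchTemplate(g2, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > ncc_min:
            src.append([x, y])
            dst.append([loc[0] + h, loc[1] + h])
    # RANSAC rejects remaining outlier pairs while fitting a homography
    H, _ = cv2.findHomography(np.float32(src), np.float32(dst),
                              cv2.RANSAC, 3.0)
    return H
```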
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
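A simplified numerical version of the line-segment-spread idea, a static Gaussian PSF integrated along a streak of length v times the exposure time, can be written directly; the parameter values below are illustrative and the paper's analytical closed form is not reproduced here.

```python
import numpy as np

def smeared_spot(flux, t_exp, v, sigma, size=15, nsteps=200):
    """Energy distribution of a star spot smeared along the x axis:
    a static Gaussian PSF integrated along a streak of length v*t_exp.
    Total energy equals flux * t_exp (each Gaussian is normalized)."""
    y, x = np.indices((size, size)) - size // 2
    streak = np.linspace(-v * t_exp / 2, v * t_exp / 2, nsteps)
    img = np.zeros((size, size))
    for s in streak:                  # superpose the moving PSF
        img += np.exp(-((x - s) ** 2 + y ** 2) / (2 * sigma ** 2))
    return img * flux * t_exp / (nsteps * 2 * np.pi * sigma ** 2)

# Longer exposure or faster motion stretches the same energy thinner
spot = smeared_spot(flux=1e4, t_exp=0.1, v=30.0, sigma=0.7)
```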
Baharara, Javad; Amini, Elaheh
2015-01-01
The anti-cancer potential of marine natural products such as polysaccharides has shown therapeutic promise in oncological research. In this study, total polysaccharide from the brittle star [Ophiocoma erinaceus (O. erinaceus)] was extracted and the chemopreventive efficacy of Persian Gulf brittle star polysaccharide was investigated in HeLa human cervical cancer cells. To extract the polysaccharide, dried brittle stars were ground and extracted mechanically. Detection of the polysaccharide was then performed by the phenol-sulfuric acid and ultraviolet (UV)-sulfuric acid methods and FTIR. The antiproliferative activity of the isolated polysaccharide was examined by MTT assay, and cell death was evaluated through morphological cell changes, propidium iodide staining, fluorescence microscopy and caspase-3 and -9 enzymatic measurements. To assess the underlying mechanism, expression of Bax and Bcl-2 was evaluated. The polysaccharide detection methods demonstrated isolation of crude polysaccharide from the Persian Gulf brittle star. The results revealed that O. erinaceus polysaccharide suppressed the proliferation of HeLa cells in a dose- and time-dependent manner. Morphological observation with DAPI and Acridine Orange/propidium iodide staining documented typical characteristics of apoptotic cell death. Flow cytometry analyses exhibited the accumulation of treated cells in the sub-G1 region. Additionally, the extracted polysaccharide induced intrinsic apoptosis via up-regulation of caspase-3, caspase-9 and Bax along with down-regulation of Bcl-2 in HeLa cells. Taken together, the apoptosis-inducing effect of brittle star polysaccharide via the intrinsic pathway confirmed the antitumor potential of this marine polysaccharide. These findings therefore offer new insight into the anticancer properties of brittle star polysaccharide as a promising agent in cervical cancer treatment.
VizieR Online Data Catalog: Model SDSS colors for halo stars (Allende Prieto+, 2014)
NASA Astrophysics Data System (ADS)
Allende Prieto, C.; Fernandez-Alvar, E.; Schlesinger, K. J.; Lee, Y. S.; Morrison, H. L.; Schneider, D. P.; Beers, T. C.; Bizyaev, D.; Ebelke, G.; Malanushenko, E.; Oravetz, D.; Pan, K.; Simmons, A.; Simmerer, J.; Sobeck, J.; Robin, A. C.
2014-06-01
We analyze a sample of tens of thousands of spectra of halo turnoff stars, obtained with the optical spectrographs of the Sloan Digital Sky Survey (SDSS), to characterize the stellar halo population "in situ" out to a distance of a few tens of kpc from the Sun. In this paper we describe the derivation of atmospheric parameters. We also derive the overall stellar metallicity distribution based on F-type stars observed as flux calibrators for the Baryonic Oscillations Spectroscopic Survey (BOSS). Our analysis is based on an automated method that determines the set of parameters of a model atmosphere that best reproduces each observed spectrum. We use an optimization algorithm and evaluate model fluxes by means of interpolation in a pre-computed grid. In our analysis, we account for the spectrograph's varying resolution as a function of fiber and wavelength. Our results for early SDSS (pre-BOSS upgrade) data compare well with those from the SEGUE Stellar Parameter Pipeline (SSPP), except for stars at log g (cgs units) lower than 2.5. An analysis of stars in the globular cluster M13 reveals a dependence of the inferred metallicity on surface gravity for stars with log g < 2.5, confirming the systematics identified in the comparison with the SSPP. We find that our metallicity estimates are significantly more precise than the SSPP results. We also find excellent agreement with several independent analyses. We show that the SDSS color criteria for selecting F-type halo turnoff stars as flux calibrators efficiently excludes stars with high metallicities, but does not significantly distort the shape of the metallicity distribution at low metallicity. We obtain a halo metallicity distribution that is narrower and more asymmetric than in previous studies. The lowest gravity stars in our sample, at tens of kpc from the Sun, indicate a shift of the metallicity distribution to lower abundances, consistent with that expected from a dual halo system in the Milky Way. (1 data file).
NASA Astrophysics Data System (ADS)
Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza
2017-06-01
Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agriculture crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are the linear polarization intensities and several statistical and physical decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, the kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. Unlike conventional partitioning clustering algorithms, the kernel function maps non-spherical and non-linearly separable data structures so that they can be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated by using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July in 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (a 12% improvement in general). In addition, when the optimization technique is used, a greater improvement in crop classification is observed, about 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
Detection of the Spectrum of the Suspected Hot Subdwarf Companion to the Be Star 59 Cygni
NASA Astrophysics Data System (ADS)
Peters, Geraldine J.; Gies, D. R.; Pewett, T.; Touhami, Y.
2013-01-01
One method through which Be stars can acquire their circumstellar (CS) disks and large angular momentum is through binary mass transfer. We thus expect that some Be stars will have hot subdwarf companions, not visible in the optical region, that are the stripped down remnants of the mass donor. From the analysis of IUE HIRES spectra in the MAST Archive we confirm that the bright Be star 59 Cygni has an O subdwarf companion. About ten years ago Harmanec et al. (2002, A&A, 387, 580) and later Maintz et al. (2005, Pub.Astr.Inst.Cz, 93, 21) presented evidence for a binary system of this nature from optical spectra but the photospheric spectrum of the secondary was not detected. We find a spectral signature of the secondary by cross-correlating the IUE spectra with model spectra and confirm the period of 28.2 days reported by Harmanec et al. and Maintz et al. The individual spectra were extracted using a Doppler tomography algorithm. The hot subdwarf contributes only 4% of the light in the FUV and resembles the sdO star BD+75°325. We find the following primary/secondary parameters: Teff = 21.8 ± 0.7 and 52.1 ± 4.8 kK, M = 6.3-9.4 and 0.62-0.91 Msun, and R = 5.8-7.0 and 0.36-0.43 Rsun. 59 Cygni joins φ Persei and FY Canis Majoris as the third bright Be star with a confirmed sdO companion. We are grateful for support from NASA/ADAP grant NNX10AD60G (GJP), NSF grant AST-1009080 (DRG) and the USC WiSE program (GJP).
Afzali, Mahbubeh; Baharara, Javad; Nezhad Shahrokhabadi, Khadijeh; Amini, Elaheh
2017-01-01
Leukemia is a blood disease that arises from inhibition of differentiation and an increased proliferation rate. Nature has long been known as a rich source of medically useful substances, and the high diversity of bioactive molecules extracted from marine invertebrates makes them ideal candidates for cancer research. This study investigated the cytotoxic effects of dichloromethane brittle star extract and doxorubicin on EL4 cancer cells. Blood cancer EL4 cells were cultured and treated with different concentrations of brittle star (Ophiocoma erinaceus) dichloromethane extract for 24, 48 and 72 h. Cell toxicity was studied using the MTT assay. Cell morphology was examined using an inverted microscope. Further, apoptosis was examined using Annexin V-FITC, propidium iodide, DAPI, and Acridine orange/propidium iodide staining. Finally, the apoptosis pathways were analyzed by measuring caspase-3 and -9 activity. Statistical analysis was performed using SPSS with ANOVA and Tukey's test; P<0.05 was considered significant. The MTT assay and morphological observations showed that the dichloromethane extract inhibited cell growth in a dose-dependent manner, with an IC50 of 32 µg/mL. Doxorubicin likewise suppressed EL4 proliferation with an IC50 of 32 µg/mL. All experiments related to the apoptosis analysis confirmed that dichloromethane brittle star extract and doxorubicin have a cytotoxic effect on EL4 cells at the IC50 concentration. The study suggests that dichloromethane brittle star extract could serve as an adjunct to doxorubicin in the treatment of leukemia cells. PMID:29844793
Peters, Sanne A E; Dunford, Elizabeth; Jones, Alexandra; Ni Mhurchu, Cliona; Crino, Michelle; Taylor, Fraser; Woodward, Mark; Neal, Bruce
2017-07-05
The Health Star Rating (HSR) is an interpretive front-of-pack labelling system that rates the overall nutritional profile of packaged foods. The algorithm underpinning the HSR includes total sugar content as one of the components. This has been criticised because intrinsic sugars naturally present in dairy, fruits, and vegetables are treated the same as sugars added during food processing. We assessed whether the HSR could better discriminate between core and discretionary foods by including added sugar in the underlying algorithm. Nutrition information was extracted for 34,135 packaged foods available in The George Institute's Australian FoodSwitch database. Added sugar levels were imputed from food composition databases. Products were classified as 'core' or 'discretionary' based on the Australian Dietary Guidelines. The ability of each of the nutrients included in the HSR algorithm, as well as added sugar, to discriminate between core and discretionary foods was estimated using the area under the curve (AUC). 15,965 core and 18,350 discretionary foods were included. Of these, 8230 (52%) core foods and 15,947 (87%) discretionary foods contained added sugar. Median (Q1, Q3) HSRs were 4.0 (3.0, 4.5) for core foods and 2.0 (1.0, 3.0) for discretionary foods. Median added sugar contents (g/100 g) were 3.3 (1.5, 5.5) for core foods and 14.6 (1.8, 37.2) for discretionary foods. Of all the nutrients used in the current HSR algorithm, total sugar had the greatest individual capacity to discriminate between core and discretionary foods; AUC 0.692 (0.686; 0.697). Added sugar alone achieved an AUC of 0.777 (0.772; 0.782). A model with all nutrients in the current HSR algorithm had an AUC of 0.817 (0.812; 0.821), which increased to 0.871 (0.867; 0.874) with inclusion of added sugar. The HSR nutrients discriminate well between core and discretionary packaged foods. However, discrimination was improved when added sugar was also included. These data argue for inclusion of added sugar in an updated HSR algorithm and declaration of added sugar as part of mandatory nutrient declarations.
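The per-nutrient comparison at the heart of the study is a single-variable AUC computation. The sketch below uses simulated stand-in data, not the FoodSwitch database; the nutrient distributions and labels are fabricated for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical nutrient table: rows are products, 1 = discretionary
rng = np.random.default_rng(1)
is_discretionary = rng.integers(0, 2, 1000)
nutrients = {
    "total_sugar": rng.gamma(2, 5 + 10 * is_discretionary),
    "added_sugar": rng.gamma(2, 2 + 14 * is_discretionary),
}
# AUC of each nutrient alone as a discriminator between core and
# discretionary foods, mirroring the study's comparison
for name, values in nutrients.items():
    print(name, round(roc_auc_score(is_discretionary, values), 3))
```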
NASA Astrophysics Data System (ADS)
Gonzalez-Nicolas, A.; Cihan, A.; Birkholzer, J. T.; Petrusak, R.; Zhou, Q.; Riestenberg, D. E.; Trautz, R. C.; Godec, M.
2016-12-01
Industrial-scale injection of CO2 into the subsurface can cause reservoir pressure increases that must be properly controlled to prevent any potential environmental impact. Excessive pressure buildup in a reservoir may result in ground water contamination stemming from leakage through conductive pathways, such as improperly plugged abandoned wells or distant faults, and in the potential for fault reactivation and possibly seal breaching. Brine extraction is a viable approach for managing formation pressure, effective stress, and plume movement during industrial-scale CO2 injection projects. The main objective of this study is to investigate different pressure management strategies involving active brine extraction and passive pressure relief wells. Adaptive optimized management of CO2 storage projects utilizes advanced automated optimization algorithms and suitable process models, integrating monitoring, forward modeling, inversion modeling and optimization through an iterative process. In this study, we employ an adaptive framework to understand primarily how the initial site characterization and the frequency of model updates (calibration) and optimization calculations for controlling extraction rates based on monitoring data affect the accuracy and success of the management scheme, without violating pressure buildup constraints in the subsurface reservoir system. We will present results of applying the adaptive framework to test the appropriateness of different management strategies for a realistic field injection project.
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and Interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model is employed for image modelling, and iteration of the Expectation-Maximization algorithm learns the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
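The same ingredients, colour GMMs plus Gibbs-energy minimization by min-cut, are packaged in OpenCV's GrabCut, which makes a convenient stand-in for experimenting with this family of methods. The sketch below is not the authors' implementation; the rectangle seed and iteration count are assumptions.

```python
import cv2
import numpy as np

def segment_hand(img, rect):
    """GrabCut: colour GMMs + Gibbs-energy minimization by min-cut.
    rect is an (x, y, w, h) box loosely enclosing the hand."""
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)   # internal GMM state
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Sure + probable foreground pixels form the hand region
    fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(img.dtype)
    return img * fg[:, :, None]
```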
Optimal configuration of power grid sources based on optimal particle swarm algorithm
NASA Astrophysics Data System (ADS)
Wen, Yuanhua
2018-04-01
To optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are reviewed. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power optimization configuration solution.
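For reference, the canonical single-objective PSO update pulls each particle toward its personal best and the swarm best. The coefficients below are conventional defaults, and the sphere objective merely stands in for a grid-source configuration cost.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    """Minimal particle swarm optimizer minimizing a cost function f."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy usage: a sphere cost standing in for a configuration objective
best, cost = pso(lambda z: float((z ** 2).sum()), dim=4)
```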
Photometry of Standard Stars and Open Star Clusters
NASA Astrophysics Data System (ADS)
Jefferies, Amanda; Frinchaboy, Peter
2010-10-01
Photometric CCD observations of open star clusters and standard stars were carried out at the McDonald Observatory in Fort Davis, Texas. These data were analyzed using aperture photometry algorithms (DAOPHOT II and ALLSTAR) and the IRAF software package. Color-magnitude diagrams of these clusters were produced, showing the evolution of each cluster along the main sequence.
Abolghasemi, Mir Mahdi; Habibiyan, Rahim; Jaymand, Mehdi; Piryaei, Marzieh
2018-02-14
A nanostructured star-shaped polythiophene dendrimer was prepared and used as a fiber coating for headspace solid phase microextraction of selected triazolic pesticides (tebuconazole, hexaconazole, penconazole, diniconazole, difenoconazole, triticonazole) from water samples. The dendrimer, with its large surface area, was characterized by thermogravimetric analysis, UV-Vis spectroscopy and field emission scanning electron microscopy. It was placed on a stainless steel wire for use in SPME. The experimental conditions for fiber coating, extraction, stirring rate, ionic strength, pH value, desorption temperature and time were optimized. Following thermal desorption, the pesticides were quantified by GC-MS. Under optimum conditions, the repeatability (RSD) for one fiber (n = 3) ranges from 4.3 to 5.6%. The detection limits are between 8 and 12 pg mL⁻¹. The method is fast, inexpensive (in terms of equipment), and the fiber has high thermal stability. Graphical abstract: Schematic presentation of a nanostructured star-shaped polythiophene dendrimer for use in headspace solid phase microextraction of the triazolic pesticides (tebuconazole, hexaconazole, penconazole, diniconazole, difenoconazole, triticonazole), which were then quantified by gas chromatography-mass spectrometry.
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices (a chord). Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
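A sequential toy version of the augmentation idea, start from a spanning (hence chordal) subgraph and keep adding edges while chordality is preserved, can be written with networkx; this sketch ignores the paper's parallelization and efficiency concerns entirely.

```python
import networkx as nx

def maximal_chordal_subgraph(G):
    """Start from a spanning forest (always chordal) and repeatedly
    augment with edges that keep the subgraph chordal, until no
    further edge can be added (hence maximal)."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(nx.minimum_spanning_edges(G, data=False))
    added = True
    while added:                      # repeat: a rejected edge may become
        added = False                 # addable after later augmentations
        for u, v in G.edges:
            if H.has_edge(u, v):
                continue
            H.add_edge(u, v)
            if nx.is_chordal(H):
                added = True
            else:
                H.remove_edge(u, v)   # edge would create a chordless cycle
    return H

G = nx.gnp_random_graph(30, 0.3, seed=1)
H = maximal_chordal_subgraph(G)
print(H.number_of_edges(), "of", G.number_of_edges(), "edges kept")
```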
Malikopoulos, Andreas
2015-01-01
The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption, demonstrating that the online Pareto control policy is an optimal control policy.
Imbalanced Learning for RR Lyrae Stars Based on SDSS and GALEX Databases
NASA Astrophysics Data System (ADS)
Zhang, Jingyi; Zhang, Yanxia; Zhao, Yongheng
2018-03-01
We apply machine learning and Convex-Hull algorithms to separate RR Lyrae stars from other stars like main-sequence stars, white dwarf stars, carbon stars, CVs, and carbon-line stars, based on the Sloan Digital Sky Survey and Galaxy Evolution Explorer (GALEX). In low-dimensional spaces, the Convex-Hull algorithm is applied to select RR Lyrae stars. Given different input patterns of (u ‑ g, g ‑ r), (g ‑ r, r ‑ i), (r ‑ i, i ‑ z), (u ‑ g, g ‑ r, r ‑ i), (g ‑ r, r ‑ i, i ‑ z), (u ‑ g, g ‑ r, i ‑ z), and (u ‑ g, r ‑ i, i ‑ z), different convex hulls can be built for RR Lyrae stars. Comparing the performance of different input patterns, (u ‑ g, g ‑ r, i ‑ z) is the best input pattern. For this input pattern, the efficiency (the fraction of true RR Lyrae stars in the predicted RR Lyrae sample) is 4.2% with a completeness (the fraction of recovered RR Lyrae stars in the whole RR Lyrae sample) of 100%, increases to 9.9% with 97% completeness and to 16.1% with 53% completeness by removing some outliers. In high-dimensional spaces, machine learning algorithms are used with input patterns (u ‑ g, g ‑ r, r ‑ i, i ‑ z), (u ‑ g, g ‑ r, r ‑ i, i ‑ z, r), (NUV ‑ u, u ‑ g, g ‑ r, r ‑ i, i ‑ z), and (NUV ‑ u, u ‑ g, g ‑ r, r ‑ i, i ‑ z, r). RR Lyrae stars, which belong to the class of interest in our paper, are rare compared to other stars. For the highly imbalanced data, cost-sensitive Support Vector Machine, cost-sensitive Random Forest, and Fast Boxes are used. The results show that information from GALEX is helpful for identifying RR Lyrae stars, and Fast Boxes is the best performer on the skewed data in our case.
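A minimal version of the low-dimensional selection step, building a convex hull from known RR Lyrae colours and testing candidates for membership, is shown below; the colours are fabricated stand-ins for the SDSS data.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
# Fabricated colours in (u-g, g-r, i-z) space: a tight locus of known
# RR Lyrae stars and a broad cloud of candidate stars
rr_colours = rng.normal([1.15, 0.25, 0.05], 0.05, size=(200, 3))
candidates = rng.normal([1.10, 0.30, 0.00], 0.30, size=(5000, 3))

hull = Delaunay(rr_colours)                  # triangulated convex hull
inside = hull.find_simplex(candidates) >= 0  # point-in-hull test
print(inside.sum(), "of", len(candidates), "candidates inside the hull")
```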
Lost in space: Onboard star identification using CCD star tracker data without an a priori attitude
NASA Technical Reports Server (NTRS)
Ketchum, Eleanor A.; Tolson, Robert H.
1993-01-01
There are many algorithms in use today which determine spacecraft attitude by identifying stars in the field of view of a star tracker. Some methods, which date from the early 1960's, compare the angular separation between observed stars with a small catalog. In the last 10 years, several methods have been developed which speed up the process and reduce the amount of memory needed, a key element of onboard attitude determination. However, each of these methods requires some a priori knowledge of the spacecraft attitude. Although the Sun and magnetic field generally provide the necessary coarse attitude information, there are occasions when a spacecraft could get lost and when it is not prudent to wait for sunlight. Also, the possibility of efficient attitude determination using only the highly accurate CCD star tracker could lead to fully autonomous spacecraft attitude determination. The need for redundant coarse sensors could thus be eliminated at substantial cost reduction. Some groups have extended their algorithms to implement a computation-intensive full-sky scan. Some require large databases. Both storage and speed are concerns for autonomous onboard systems. Neural network technology is even being explored by some as a possible solution, but because of the limited number of patterns that can be stored and the large overhead, nothing concrete has resulted from these efforts. This paper presents an algorithm which, by discretizing the sky and filtering by the visual magnitude of the brightest observed star, speeds up the lost-in-space star identification process while reducing the amount of necessary onboard computer storage compared to existing techniques.
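The geometric core shared by such methods is matching observed angular separations against catalog pairs. A numpy sketch follows; the paper's sky discretization and magnitude filtering, which prune this search, are omitted, and the tolerance is an assumption.

```python
import numpy as np

def radec_to_unit(ra, dec):
    """Unit vectors from right ascension / declination (radians)."""
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def match_separation(obs_sep, catalog_vecs, tol=2.4e-4):
    """Catalog index pairs whose angular separation matches the
    observed one within tol radians (about 50 arcseconds)."""
    cosd = np.clip(catalog_vecs @ catalog_vecs.T, -1.0, 1.0)
    sep = np.arccos(cosd)
    i, j = np.where(np.abs(sep - obs_sep) < tol)
    return [(a, b) for a, b in zip(i, j) if a < b]

cat = radec_to_unit(np.radians([10.0, 10.2, 11.0]),
                    np.radians([5.0, 5.1, 4.9]))
obs = np.arccos(cat[0] @ cat[1])        # separation measured on the CCD
print(match_separation(obs, cat))       # -> [(0, 1)]
```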
Modeling Self-subtraction in Angular Differential Imaging: Application to the HD 32297 Debris Disk
NASA Astrophysics Data System (ADS)
Esposito, Thomas M.; Fitzgerald, Michael P.; Graham, James R.; Kalas, Paul
2014-01-01
We present a new technique for forward-modeling self-subtraction of spatially extended emission in observations processed with angular differential imaging (ADI) algorithms. High-contrast direct imaging of circumstellar disks is limited by quasi-static speckle noise, and ADI is commonly used to suppress those speckles. However, the application of ADI can result in self-subtraction of the disk signal due to the disk's finite spatial extent. This signal attenuation varies with radial separation and biases measurements of the disk's surface brightness, thereby compromising inferences regarding the physical processes responsible for the dust distribution. To compensate for this attenuation, we forward model the disk structure and compute the form of the self-subtraction function at each separation. As a proof of concept, we apply our method to 1.6 and 2.2 μm Keck adaptive optics NIRC2 scattered-light observations of the HD 32297 debris disk reduced using a variant of the "locally optimized combination of images" algorithm. We are able to recover disk surface brightness that was otherwise lost to self-subtraction and produce simplified models of the brightness distribution as it appears with and without self-subtraction. From the latter models, we extract radial profiles for the disk's brightness, width, midplane position, and color that are unbiased by self-subtraction. Our analysis of these measurements indicates a break in the brightness profile power law at r ≈ 110 AU and a disk width that increases with separation from the star. We also verify disk curvature that displaces the midplane by up to 30 AU toward the northwest relative to a straight fiducial midplane.
Adaptive Optics for the Thirty Meter Telescope
NASA Astrophysics Data System (ADS)
Ellerbroek, Brent
2013-12-01
This paper provides an overview of the progress made since the last AO4ELT conference towards developing the first-light AO architecture for the Thirty Meter Telescope (TMT). The Preliminary Design of the facility AO system NFIRAOS has been concluded by the Herzberg Institute of Astrophysics. Work on the client Infrared Imaging Spectrograph (IRIS) has progressed in parallel, including a successful Conceptual Design Review and prototyping of On-Instrument WFS (OIWFS) hardware. Progress on the design for the Laser Guide Star Facility (LGSF) continues at the Institute of Optics and Electronics in Chengdu, China, including the final acceptance of the Conceptual Design and modest revisions for the updated TMT telescope structure. Design and prototyping activities continue for lasers, wavefront sensing detectors, detector readout electronics, real-time control (RTC) processors, and deformable mirrors (DMs) with their associated drive electronics. Highlights include development of a prototype sum frequency guide star laser at the Technical Institute of Physics and Chemistry (Beijing); fabrication/test of prototype natural- and laser-guide star wavefront sensor CCDs for NFIRAOS by MIT Lincoln Laboratory and W.M. Keck Observatory; a trade study of RTC control algorithms and processors, with prototyping of GPU and FPGA architectures by TMT and the Dominion Radio Astrophysical Observatory; and fabrication/test of a 6x60 actuator DM prototype by CILAS. Work with the University of British Columbia LIDAR is continuing, in collaboration with ESO, to measure the spatial/temporal variability of the sodium layer and characterize the sodium coupling efficiency of several guide star laser systems. AO performance budgets have been further detailed. Modeling topics receiving particular attention include performance vs. computational cost tradeoffs for RTC algorithms; optimizing performance of the tip/tilt, plate scale, and sodium focus tracking loops controlled by the NGS on-instrument wavefront sensors; sky coverage; PSF reconstruction for LGS MCAO; and precision astrometry for the galactic center and other observations.
Optimizing low latency LIGO-Virgo localization
NASA Astrophysics Data System (ADS)
Chen, Hsin-Yu; Holz, Daniel
2015-04-01
Fast and effective localization of gravitational wave (GW) events could play a crucial role in identifying possible electromagnetic counterparts, and thereby help usher in an era of GW multi-messenger astronomy. We discuss an algorithm for accurate and very low latency (<< 1 second) localization of GW sources using only the time of arrival and signal-to-noise ratio at each detector. The algorithm is independent of distances, masses, and waveform templates of the sources to leading order, and applies to all discrete sources detected by ground-based detector networks. For the two-detector configuration (LIGO Hanford+Livingston) expected in late 2015 we find a median 50% localization of 150 deg² for binary neutron stars (for SNR threshold of 12), consistent with previous findings. We explore the improvement in localization resulting from high SNR events, finding that the loudest out of the first four events reduces the median sky localization area by a factor of 1.8. We also discuss some strategies to optimize electromagnetic follow-up of GW events. We specifically explore the case of multi-messenger joint detections coming from independent (and possibly highly uncertain) localizations, such as for short gamma-ray bursts observed by Fermi GBM and neutrinos captured by IceCube.
Bouridane, Ahmed; Ling, Bingo Wing-Kuen
2018-01-01
This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
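For plain (non-convolutive, non-sparse) NMF, the β-divergence multiplicative updates take a compact form that is valid for fractional β as well. The sketch below omits the paper's 2-D deconvolution and sparsity terms and uses standard textbook update rules.

```python
import numpy as np

def nmf_beta(V, rank=4, beta=0.5, iters=200, seed=0):
    """Multiplicative-update NMF under the beta-divergence,
    V ~= W @ H with nonnegative factors; fractional beta allowed."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-3
    H = rng.random((rank, V.shape[1])) + 1e-3
    for _ in range(iters):
        WH = W @ H
        # standard beta-divergence multiplicative updates
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1))
        WH = W @ H
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T)
    return W, H

# Toy usage on a magnitude-spectrogram-like nonnegative matrix
V = np.abs(np.random.default_rng(1).normal(size=(64, 100))) + 1e-6
W, H = nmf_beta(V, rank=4, beta=0.5)
```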
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to the X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of the swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level of complexity and randomness inherent to the selection of electromagnetic benchmark problems, a trend to resort to oversimplification in order to arrive at reasonable solutions has been taken in the literature when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
Bilevel thresholding of sliced image of sludge floc.
Chu, C P; Lee, D J
2004-02-15
This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to the evaluation on the luminescence-inverted images and fractal curves (quadric Koch curve and Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
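Otsu's method itself is a short histogram computation: pick the gray level that maximizes the between-class variance of the bilevel split. A self-contained numpy sketch with synthetic stand-in data:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the gray level maximizing the
    between-class variance sigma_B^2(k)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(256))          # cumulative mean
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b2))

# Toy usage: a bimodal 8-bit image standing in for a floc slice
rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 10, 500)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(25, 40)
binary = img > otsu_threshold(img)
```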
Data Mining and Optimization Tools for Developing Engine Parameters Tools
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1998-01-01
This project was awarded for understanding the problem and developing a plan for Data Mining tools for use in designing and implementing an Engine Condition Monitoring System. From the total budget of $5,000, Tricia and I studied the problem domain for developing an Engine Condition Monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop Genetic Algorithm based search programs, which were written in C++ and used to demonstrate the capability of the GA algorithm in searching for an optimal solution in noisy datasets. From the study and discussions with NASA LERC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search. Wavelet processing is proposed to create a coarse-resolution representation of the data, providing two advantages in a GA-based search: 1) there is less data to begin with when forming search sub-spaces, and 2) the search is robust against noise, because at every level of wavelet-based decomposition the signal is decomposed by low-pass and high-pass filters.
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between the prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.
Design of a backlighting structure for very large-area luminaires
NASA Astrophysics Data System (ADS)
Carraro, L.; Mäyrä, A.; Simonetta, M.; Benetti, G.; Tramonte, A.; Benedetti, M.; Randone, E. M.; Ylisaukko-Oja, A.; Keränen, K.; Facchinetti, T.; Giuliani, G.
2017-02-01
A novel approach for an RGB semiconductor LED-based backlighting system is developed to satisfy the requirements of the Project LUMENTILE funded by the European Commission, whose scope is to develop a luminous electronic tile foreseen to be manufactured in millions of square meters each year. This unconventionally large-area surface of uniform, high-brightness illumination requires a specific optical design to keep production cost low while maintaining high optical extraction efficiency and a reduced thickness of the structure, as imposed by architectural design constraints. The proposed solution is based on a light-guiding layer illuminated by LEDs in an edge configuration, or in a planar arrangement. The light-guiding slab is finished with a reflective top interface and a diffusive or reflective bottom interface/layer. Patterning is used for both the top interface (punctual removal of reflection and generation of light scattering centers) and the bottom layer (using a dark/bright printed pattern). Computer-based optimization algorithms based on ray-tracing are used to find optimal solutions in terms of uniformity of illumination of the top surface and overall light extraction efficiency. Through a closed-loop optimization process that assesses the illumination uniformity of the top surface, the algorithm generates the desired optimized top and bottom patterns, depending on the number of LED sources used, their geometry, and the thickness of the guiding layer. Specific low-cost technologies to realize the patterning are discussed, with the goal of keeping the production cost of these very large-area luminaires below 100 $ per square meter.
Matching CCD images to a stellar catalog using locality-sensitive hashing
NASA Astrophysics Data System (ADS)
Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu
2018-02-01
Using a subset of the observed stars in a CCD image to find their corresponding matched stars in a stellar catalog is an important issue in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods in star catalog matching. When more subgraph features are provided, the CCD images are recognized better. However, when the navigation feature database is large, the method requires more time to match the observed model. To solve this problem, this study investigates and improves subgraph isomorphism matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates the quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
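The bucketing idea can be illustrated by quantizing each quadrilateral's invariant feature vector into a grid-cell key, so that only one bucket needs searching at recognition time. The feature database below is fabricated, and a robust implementation would also probe neighbouring cells of the key.

```python
import numpy as np
from collections import defaultdict

def hash_key(features, cell=0.01):
    """Quantize a quad's invariant feature vector onto a grid;
    similar models then land in the same hash bucket."""
    return tuple(np.floor(np.asarray(features) / cell).astype(int))

rng = np.random.default_rng(4)
database = rng.random((10000, 4))     # fabricated 4-invariant quad models
buckets = defaultdict(list)
for idx, feat in enumerate(database):
    buckets[hash_key(feat)].append(idx)

# At recognition time only the observed quad's bucket is searched
observed = database[1234] + rng.normal(0, 1e-4, 4)
candidates = buckets[hash_key(observed)]
print(len(candidates), "candidates instead of", len(database))
```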
Zhang, Yanjun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong
2016-02-01
Given that traditional signal processing methods cannot effectively distinguish different vibration intrusion signals, a feature extraction and recognition method for vibration information is proposed based on EMD-AWPP and HOSA-SVM, used for high-precision signal recognition in distributed fiber optic intrusion detection systems. When dealing with different types of vibration, the method first utilizes an adaptive wavelet processing algorithm based on empirical mode decomposition to reduce the influence of abnormal values in the sensing signal and improve the accuracy of signal feature extraction. Not only is the low-frequency part of the signal decomposed, but the high-frequency details of the signal are also handled better by the time-frequency localization process. Second, it uses the bispectrum and bicoherence spectrum to accurately extract the feature vectors that characterize the different types of intrusion vibration. Finally, with the BPNN as a reference model, an SVM whose recognition parameters are tuned by particle swarm optimization can distinguish the signals of different intrusion vibrations, which endows the identification model with stronger adaptive and self-learning ability and overcomes shortcomings such as the tendency to fall into local optima. The simulation experiment results showed that this new method can effectively extract the feature vector of the sensing information, eliminate the influence of random noise and reduce the effects of outliers for different types of intrusion source. The predicted categories agree with the actual categories, and the accuracy of vibration identification reaches above 95%, better than the BPNN recognition algorithm, effectively improving the accuracy of the information analysis.
Liu, Ying; Ciliax, Brian J; Borges, Karin; Dasigi, Venu; Ram, Ashwin; Navathe, Shamkant B; Dingledine, Ray
2004-01-01
One of the key challenges of microarray studies is to derive biological insights from the unprecedented quantities of data on gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the nature of the functional links among genes within the derived clusters. However, the quality of the keyword lists extracted from biomedical literature for each gene significantly affects the clustering results. We extracted keywords from MEDLINE that describe the most prominent functions of the genes, and used the resulting weights of the keywords as feature vectors for gene clustering. By analyzing the resulting cluster quality, we compared two keyword weighting schemes: normalized z-score and term frequency-inverse document frequency (TFIDF). The best combination of background comparison set, stop list and stemming algorithm was selected based on precision and recall metrics. In a test set of four known gene groups, a hierarchical algorithm correctly assigned 25 of 26 genes to the appropriate clusters based on keywords extracted by the TFIDF weighting scheme, but only 23 of 26 with the z-score method. To evaluate the effectiveness of the weighting schemes for keyword extraction for gene clusters from microarray profiles, 44 yeast genes that are differentially expressed during the cell cycle were used as a second test set. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords had higher purity, lower entropy, and higher mutual information than those produced from normalized z-score weighted keywords. The optimized algorithms should be useful for sorting genes from microarray lists into functionally discrete clusters.
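A compact modern equivalent of the TFIDF-weighting-plus-hierarchical-clustering pipeline can be built with scikit-learn; the gene keyword documents below are fabricated examples, not the MEDLINE-derived lists used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical per-gene keyword documents mined from abstracts
gene_docs = {
    "GRIA1": "glutamate receptor ionotropic ampa synaptic transmission",
    "GRIN2A": "nmda receptor glutamate channel synaptic plasticity",
    "CCNB1": "cyclin mitosis cell cycle checkpoint kinase",
    "CDK1": "cyclin dependent kinase cell cycle mitosis",
}
# TFIDF keyword weights as feature vectors, then hierarchical clustering
X = TfidfVectorizer().fit_transform(gene_docs.values())
labels = AgglomerativeClustering(n_clusters=2, metric="cosine",
                                 linkage="average").fit_predict(X.toarray())
print(dict(zip(gene_docs, labels)))   # the two functional groups separate
```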
Naghibi, Fereydoun; Delavar, Mahmoud Reza; Pijanowski, Bryan
2016-12-14
Cellular Automata (CA) is one of the most common techniques used to simulate the urbanization process. CA-based urban models use transition rules to deliver spatial patterns of urban growth and urban dynamics over time. Determining the optimum transition rules of the CA is a critical step because of the heterogeneity and nonlinearities existing among urban growth driving forces. Recently, new CA models integrated with optimization methods based on swarm intelligence algorithms were proposed to overcome this drawback. The Artificial Bee Colony (ABC) algorithm is an advanced meta-heuristic swarm intelligence-based algorithm. Here, we propose a novel CA-based urban change model that uses the ABC algorithm to extract optimum transition rules. We applied the proposed ABC-CA model to simulate future urban growth in Urmia (Iran) with multi-temporal Landsat images from 1997, 2006 and 2015. Validation of the simulation results was made through statistical methods such as overall accuracy, the figure of merit and total operating characteristics (TOC). Additionally, we calibrated the CA model by ant colony optimization (ACO) to assess the performance of our proposed model versus similar swarm intelligence algorithm methods. We showed that the overall accuracy and the figure of merit of the ABC-CA model are 90.1% and 51.7%, which are 2.9% and 8.8% higher than those of the ACO-CA model, respectively. Moreover, the allocation disagreement of the simulation results for the ABC-CA model is 9.9%, which is 2.9% less than that of the ACO-CA model. Finally, the ABC-CA model also outperforms the ACO-CA model with fewer quantity and allocation errors and slightly more hits.
A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods
Tan, Hanqing; Fujita, Hiroshi
2013-01-01
This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the level set method's sensitivity to the initial contour location, and a modified distance regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods based on a CT image dataset which contains abdominal images from 10 patients. The evaluated results demonstrate that our method outperforms the other methods by achieving higher accuracy and making less false segmentation in pancreas extraction. PMID:24066016
QPO observations related to neutron star equations of state
NASA Astrophysics Data System (ADS)
Stuchlik, Zdenek; Urbanec, Martin; Török, Gabriel; Bakala, Pavel; Cermak, Petr
We apply a genetic algorithm method for selection of neutron star models, relating them to the resonant models of the twin peak quasiperiodic oscillations observed in X-ray neutron star binary systems. It was suggested that pairs of kilohertz peaks in the X-ray Fourier power density spectra of some neutron stars reflect a non-linear resonance between two modes of accretion disk oscillations. We investigate this concept for a specific neutron star source. Each neutron star model is characterized by the equation of state (EOS), rotation frequency Ω and central energy density ρ_c. These determine the spacetime structure governing geodesic motion and position-dependent radial and vertical epicyclic oscillations related to the stable circular geodesics. Particular kinds of resonances (KR) between the oscillations with epicyclic frequencies, or the frequencies derived from them, can take place at special positions assigned ambiguously to the spacetime structure. The pairs of resonant eigenfrequencies relevant to those positions are therefore fully given by (KR, ρ_c, Ω, EOS) and can be compared to the observationally determined pairs of eigenfrequencies in order to eliminate the unsatisfactory sets (KR, ρ_c, Ω, EOS). For the elimination we use an advanced genetic algorithm. The genetic algorithm derives from natural selection, in which the individuals best adapted to the given conditions have the greatest chance of survival. The chosen genetic algorithm with sexual reproduction contains one chromosome with restricted lifetime, uniform crossover, and genes of type 3/3/5. For encoding the physical description (KR, ρ_c, Ω, EOS) into the chromosome we used Gray code. As a fitness function we use the correspondence between the observed and calculated pairs of eigenfrequencies.
Neutron star equation of state and QPO observations
NASA Astrophysics Data System (ADS)
Urbanec, Martin; Stuchlík, Zdeněk; Török, Gabriel; Bakala, Pavel; Čermák, Petr
2007-12-01
Assuming a resonant origin of the twin peak quasiperiodic oscillations observed in X-ray neutron star binary systems, we apply a genetic algorithm method for selection of neutron star models. It was suggested that pairs of kilohertz peaks in the X-ray Fourier power density spectra of some neutron stars reflect a non-linear resonance between two modes of accretion disk oscillations. We investigate this concept for a specific neutron star source. Each neutron star model is characterized by the equation of state (EOS), rotation frequency Ω and central energy density ρ_c. These determine the spacetime structure governing geodesic motion and position-dependent radial and vertical epicyclic oscillations related to the stable circular geodesics. Particular kinds of resonances (KR) between the oscillations with epicyclic frequencies, or the frequencies derived from them, can take place at special positions assigned ambiguously to the spacetime structure. The pairs of resonant eigenfrequencies relevant to those positions are therefore fully given by (KR, ρ_c, Ω, EOS) and can be compared to the observationally determined pairs of eigenfrequencies in order to eliminate the unsatisfactory sets (KR, ρ_c, Ω, EOS). For the elimination we use an advanced genetic algorithm. The genetic algorithm derives from natural selection, in which the individuals best adapted to the given conditions have the greatest chance of survival. The chosen genetic algorithm with sexual reproduction contains one chromosome with restricted lifetime, uniform crossover, and genes of type 3/3/5. For encoding the physical description (KR, ρ_c, Ω, EOS) into the chromosome we use the Gray code. As a fitness function we use the correspondence between the observed and calculated pairs of eigenfrequencies.
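Gray coding, named as the chromosome encoding in both abstracts, has the property that consecutive integers differ in a single bit, so small mutations produce small parameter steps. A minimal sketch of the encode/decode pair:

```python
# Gray-code helpers of the kind used to encode GA chromosome fields;
# adjacent integers differ in exactly one bit, so single-bit mutations
# make small moves in parameter space.
def gray_encode(n: int) -> int:
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# e.g. a 5-bit gene indexing an assumed grid of central energy densities:
for i in range(4):
    print(i, format(gray_encode(i), "05b"))
assert all(gray_decode(gray_encode(i)) == i for i in range(32))
```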
Support Vector Machine-Based Endmember Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippi, Anthony M; Archibald, Richard K
Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.
NASA Astrophysics Data System (ADS)
Martínez-Galarza, Rafael; Protopapas, Pavlos; Smith, Howard A.; Morales, Esteban
2018-01-01
From an observational point of view, the early life of massive stars is difficult to understand partly because star formation occurs in crowded clusters where individual stars often appear blended together in the beams of infrared telescopes. This renders the characterization of the physical properties of young embedded clusters via spectral energy distribution (SED) fitting a challenging task. Of particular relevance for the testing of star formation models is the question of whether the claimed universality of the IMF is reflected in an equally universal integrated galactic initial mass function (IGIMF) of stars. In other words, is the set of all stellar masses in the galaxy sampled from a single universal IMF, or does the distribution of masses depend on the environment, making the IGIMF different from the canonical IMF? If the latter is true, how different are the two? We present an infrared SED analysis of ~70 Spitzer-selected, low-mass ($<100~\\rm{M}_{\\odot}$), Galactic blended clusters. For all of the clusters we obtain the most probable individual SED of each member and derive their physical properties, effectively deblending the confused emission from individual YSOs. Our algorithm incorporates a combined probabilistic model of the blended SEDs and the unresolved images at the long-wavelength end. We find that our results are compatible with competitive accretion in the central regions of young clusters, with the most massive stars forming early on in the process and less massive stars forming about 1 Myr later. We also find evidence for a relationship between the total stellar mass of the cluster and the mass of the most massive member that favors optimal sampling in the cluster and disfavors random sampling from the canonical IMF, implying that star formation is self-regulated and that the mass of the most massive star in a cluster depends on the available resources. The method presented here is easily adapted to future observations of clustered regions of star formation with JWST and other high-resolution facilities.
Chen, Po-Hao; Zafar, Hanna; Galperin-Aizenberg, Maya; Cook, Tessa
2018-04-01
A significant volume of medical data remains unstructured. Natural language processing (NLP) and machine learning (ML) techniques have been shown to successfully extract insights from radiology reports. However, the codependent effects of NLP and ML in this context have not been well studied. Between April 1, 2015 and November 1, 2016, 9418 cross-sectional abdomen/pelvis CT and MR examinations containing our internal structured reporting element for cancer were separated into four categories: Progression, Stable Disease, Improvement, or No Cancer. We combined each of three NLP techniques with five ML algorithms to predict the assigned label using the unstructured report text and compared the performance of each combination. The three NLP algorithms included term frequency-inverse document frequency (TF-IDF), term frequency weighting (TF), and 16-bit feature hashing. The ML algorithms included logistic regression (LR), random decision forest (RDF), one-vs-all support vector machine (SVM), one-vs-all Bayes point machine (BPM), and fully connected neural network (NN). The best-performing NLP model consisted of tokenized unigrams and bigrams with TF-IDF. Increasing N-gram length yielded little to no added benefit for most ML algorithms. With all parameters optimized, SVM had the best performance on the test dataset, with 90.6% average accuracy and an F score of 0.813. The interplay between ML and NLP algorithms and their effect on interpretation accuracy is complex. The best accuracy is achieved when both algorithms are optimized concurrently.
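A hedged sketch of the reported best combination (unigram+bigram TF-IDF feeding a linear one-vs-all SVM) in scikit-learn; the example reports and labels are invented placeholders, not study data.

```python
# Sketch of the reported best combination: uni+bigram TF-IDF into a
# one-vs-rest linear SVM; the two example reports below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reports = ["interval increase in hepatic metastases ...",
           "no evidence of new or recurrent disease ..."]
labels = ["Progression", "No Cancer"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),  # unigrams+bigrams
    LinearSVC(),                    # one-vs-rest by default for >2 classes
)
clf.fit(reports, labels)
print(clf.predict(["stable appearance of known osseous lesions"]))
```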
Lee, Jae-Hong; Kim, Do-Hyung; Jeong, Seong-Nyum; Choi, Seong-Ho
2018-04-01
The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Combining a pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. We demonstrated that the deep CNN algorithm is useful for diagnosing PCT and predicting its outcome. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
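The abstract does not give the architecture, so the following Keras sketch is only a generic example of the "pretrained CNN plus self-trained head" pattern it describes; the backbone choice, input size, and head layers are assumptions, not the authors' published design.

```python
# Hedged sketch of "pretrained CNN + self-trained head" in Keras; backbone,
# input size and head layers are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                     # keep pretrained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"), # PCT vs. non-PCT
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets assumed
```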
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from the ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of an analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models the wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural characteristics of the ionosphere, including anisotropy, inhomogeneity and time dependence, are represented as realistically as possible in a 3-D voxel structure. The algorithm is also used for various purposes, including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation at near vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results for both quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better than standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in the determination of the reflection height (true height) of signals and critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
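As a toy illustration of the analytic-profile fitting that ionogram scaling performs, one can fit a single Chapman layer to synthetic electron density samples with scipy; the IONOLAB algorithm fits its own analytic form to ionogram traces, so this is not their method, only the optimization pattern.

```python
# Illustration only: fit one Chapman-layer profile to (height, Ne) samples.
import numpy as np
from scipy.optimize import curve_fit

def chapman(h, NmF2, hmF2, H):
    z = (h - hmF2) / H
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(150, 500, 40)                        # km
true_ne = chapman(h, 1.0e12, 300.0, 50.0)            # synthetic "truth", el/m^3
ne = true_ne * (1 + 0.05 * np.random.default_rng(1).standard_normal(h.size))

popt, _ = curve_fit(chapman, h, ne, p0=(5e11, 250.0, 40.0))
print("NmF2=%.3g el/m^3  hmF2=%.1f km  H=%.1f km" % tuple(popt))
# foF2 in Hz then follows from NmF2 via foF2 ~ 8.98 * sqrt(NmF2)
```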
NASA Astrophysics Data System (ADS)
Andronov, I. L.
The biography of Vladimir Platonovich Tsesevich (11.11.1907 - 28.10.1983), a leader of astronomy in Odessa from 1944 to 1983, is briefly reviewed, as well as the directions of study, mainly the highlights of the research of variable stars carried out by the members of the scientific school founded by him. The directions of these studies cover a very wide range of variability types - "magnetic" and "non-magnetic" cataclysmic variables, symbiotic, X-ray and other interacting binaries, classical eclipsers and "extreme direct impactors", and pulsating variables from DSct and RR through C and RV to SR and M. Improved algorithms and programs have been elaborated for statistically optimal phenomenological and physical modeling. Initially these studies in Odessa were inspired by Vladimir Platonovich Tsesevich, who was (each role deserving "a capital letter") a meticulous Scientist and brilliant Educator, a thorough Author and intelligibly explaining Popularizer, a persevering Organizer and cheerful Joker - a true Professor and Teacher. He was the "Poet of the Starry Heavens".
Robust feature extraction for rapid classification of damage in composites
NASA Astrophysics Data System (ADS)
Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi
2009-03-01
The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require many training examples or are very computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction from damage signals acquired by piezoelectric sensors on a composite plate, and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
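A compact scikit-learn sketch of the LDA-features-into-SVM pattern described above, with random arrays standing in for the piezoelectric sensor features and damage labels.

```python
# LDA for feature extraction feeding an SVM classifier; X and y are random
# placeholders for sensor-derived features and damage-state labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 256))        # 300 signals, 256 raw features
y = rng.integers(0, 4, 300)                # 4 hypothetical damage states

clf = make_pipeline(
    LinearDiscriminantAnalysis(n_components=3),  # at most n_classes-1 dims
    SVC(kernel="rbf", C=10.0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```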
NASA Astrophysics Data System (ADS)
Wu, Kaizhi; Zhang, Xuming; Chen, Guangxie; Weng, Fei; Ding, Mingyue
2013-10-01
Images acquired in free breathing using contrast-enhanced ultrasound exhibit a periodic motion that needs to be compensated for if accurate quantification of hepatic perfusion is to be performed. In this work, we present an algorithm to compensate for the respiratory motion by effectively combining the PCA (Principal Component Analysis) method and the block matching method. The respiratory kinetics of the ultrasound hepatic perfusion image sequences were first extracted using the PCA method. Then, the optimal phase of the obtained respiratory kinetics was detected after normalizing the motion amplitude, and the corresponding image subsequences of the original image sequences were determined. The image subsequences were registered by the block matching method using cross-correlation as the similarity measure. Finally, the motion-compensated contrast images were acquired using the position mapping, and the algorithm was evaluated by comparing the TICs (time-intensity curves) extracted from the original image sequences and the compensated image subsequences. Quantitative comparisons demonstrated that the average fitting error estimated over the ROIs (regions of interest) was reduced from 10.9278 +/- 6.2756 to 5.1644 +/- 3.3431 after compensation.
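The PCA step can be illustrated compactly: treating each frame as one sample, the score of the first principal component over time serves as the breathing trace. The frames below are synthetic placeholders, not ultrasound data.

```python
# Sketch of the PCA step: the first principal component's score over time
# acts as the respiratory kinetics signal.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_frames, h, w = 200, 64, 64
t = np.arange(n_frames)
breathing = np.sin(2 * np.pi * t / 40.0)             # assumed breathing cycle
frames = (breathing[:, None, None] * rng.standard_normal((h, w))
          + 0.1 * rng.standard_normal((n_frames, h, w)))

kinetics = PCA(n_components=1).fit_transform(frames.reshape(n_frames, -1))[:, 0]
kinetics /= np.abs(kinetics).max()                   # normalize the amplitude
optimal_phase_frames = np.argsort(kinetics)[:20]     # frames nearest one phase
```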
NASA Astrophysics Data System (ADS)
Pinales, J. C.; Graber, H. C.; Hargrove, J. T.; Caruso, M. J.
2016-02-01
Previous studies have demonstrated the ability to detect and classify marine hydrocarbon films with spaceborne synthetic aperture radar (SAR) imagery. The dampening effects of hydrocarbon discharges on small surface capillary-gravity waves render the ocean surface "radar dark" compared with standard wind-roughened ocean surfaces. Given the scope and impact of events like the Deepwater Horizon oil spill, the need for improved, automated and expedient monitoring of hydrocarbon-related marine anomalies has become a pressing and complex issue for governments and the extraction industry. The research presented here describes the development, training, and utilization of an algorithm that detects marine oil spills in an automated, semi-supervised manner, utilizing X-, C-, or L-band SAR data as the primary input. Ancillary datasets include related radar-borne variables (incidence angle, etc.), environmental data (wind speed, etc.) and textural descriptors. Shapefiles produced by an experienced human analyst served as targets (validation) during the training portion of the investigation. Training and testing datasets were chosen for development and assessment of algorithm effectiveness as well as optimal conditions for oil detection in SAR data. The algorithm detects oil spills by following a 3-step methodology: object detection, feature extraction, and classification. Previous oil spill detection and classification methodologies such as machine learning algorithms, artificial neural networks (ANN), and multivariate classification methods like partial least squares-discriminant analysis (PLS-DA) are evaluated and compared. Statistical, transform, and model-based image texture techniques, commonly used for object mapping directly or as inputs for more complex methodologies, are explored to determine optimal textures for an oil spill detection system. The influence of the ancillary variables is explored, with a particular focus on the role of strong vs. weak wind forcing.
Fourier domain preconditioned conjugate gradient algorithm for atmospheric tomography.
Yang, Qiang; Vogel, Curtis R; Ellerbroek, Brent L
2006-07-20
By 'atmospheric tomography' we mean the estimation of a layered atmospheric turbulence profile from measurements of the pupil-plane phase (or phase gradients) corresponding to several different guide star directions. We introduce what we believe to be a new Fourier domain preconditioned conjugate gradient (FD-PCG) algorithm for atmospheric tomography, and we compare its performance against an existing multigrid preconditioned conjugate gradient (MG-PCG) approach. Numerical results indicate that on conventional serial computers, FD-PCG is as accurate and robust as MG-PCG, but it is from one to two orders of magnitude faster for atmospheric tomography on 30 m class telescopes. Simulations are carried out for both natural guide stars and for a combination of finite-altitude laser guide stars and natural guide stars to resolve tip-tilt uncertainty.
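A toy numpy version of the FD-PCG idea: when the system matrix is (approximately) circulant, applying the preconditioner is a pointwise division in Fourier space. In the sketch below the operator is exactly circulant plus a regularization term, so the FFT preconditioner is essentially exact; the real tomography operator is only approximately so, which is why it serves as a preconditioner rather than a direct solver.

```python
# Toy Fourier-domain PCG: solve (K + lam*I) x = b, K a circulant convolution,
# preconditioning with the FFT inverse of that operator.
import numpy as np

n = 256
rng = np.random.default_rng(3)
kernel = np.exp(-0.5 * (np.minimum(np.arange(n), n - np.arange(n)) / 4.0) ** 2)
lam = 0.1
kf = np.fft.fft(kernel).real + lam           # eigenvalues of K + lam*I

def A(x):                                    # matrix-vector product via FFT
    return np.fft.ifft(np.fft.fft(x) * kf).real

def M_inv(r):                                # Fourier-domain preconditioner
    return np.fft.ifft(np.fft.fft(r) / kf).real

b = rng.standard_normal(n)
x = np.zeros(n)
r = b - A(x)
z = M_inv(r)
p = z.copy()
for it in range(50):                         # standard PCG iteration
    Ap = A(p)
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10 * np.linalg.norm(b):
        break
    z_new = M_inv(r_new)
    beta = (r_new @ z_new) / (r @ z)
    r, z = r_new, z_new
    p = z + beta * p
print("converged in", it + 1, "iterations")
```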
A novel double fine guide sensor design on space telescope
NASA Astrophysics Data System (ADS)
Zhang, Xu-xu; Yin, Da-yi
2018-02-01
To obtain high-precision attitude for a space telescope, a double marginal FOV (field of view) FGS (Fine Guide Sensor) is proposed. It is composed of two large-area APS CMOS sensors that share the same lens in the main line of sight. More star vectors can be obtained by the two FGS detectors and used for high-precision attitude determination. To improve star identification speed, a vector cross-product formulation of the inter-star angles, suited to the small marginal FOV and different from the traditional approach, is elaborated, and parallel processing is applied to the pyramid algorithm. The star vectors from the two sensors are then fused into an attitude solution with the traditional QUEST algorithm. The simulation results show that the system can achieve high-accuracy three-axis attitude and that the scheme is feasible.
Systematics-insensitive Periodic Signal Search with K2
NASA Astrophysics Data System (ADS)
Angus, Ruth; Foreman-Mackey, Daniel; Johnson, John A.
2016-02-01
From pulsating stars to transiting exoplanets, the search for periodic signals in data from K2, Kepler’s two-wheeled extension mission, is relevant to a long list of scientific goals. Systematics affecting K2 light curves due to the decreased spacecraft pointing precision inhibit the easy extraction of periodic signals from the data. We here develop a method for producing periodograms of K2 light curves that are insensitive to pointing-induced systematics: the Systematics-insensitive Periodogram (SIP). Traditional sine-fitting periodograms use a generative model to find the frequency of the sinusoid that best describes the data. We extend this principle by including in the generative model systematic trends, based on a set of “eigen light curves” following Foreman-Mackey et al., as well as a sum of sine and cosine functions over a grid of frequencies. Using this method we are able to produce periodograms with vastly reduced systematic features. The quality of the resulting periodograms is such that we can recover acoustic oscillations in giant stars and measure stellar rotation periods without the need for any detrending. The algorithm is also applicable to the detection of other periodic phenomena such as variable stars, eclipsing binaries and short-period exoplanet candidates. The SIP code is available at https://github.com/RuthAngus/SIPK2.
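A minimal numpy rendering of the SIP idea: at each trial frequency, fit sine and cosine terms jointly with a few systematics basis vectors by linear least squares and record the sinusoid power. The data and "eigen light curves" below are synthetic stand-ins.

```python
# SIP-style periodogram sketch: sinusoid + systematics basis fit at each
# trial frequency; the recorded power is the sinusoid amplitude squared.
import numpy as np

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 80, 1000))                 # days
systematics = np.vstack([np.poly1d(rng.standard_normal(3))(t / 80.0)
                         for _ in range(4)])          # fake eigen light curves
flux = (0.01 * np.sin(2 * np.pi * t / 7.3)            # injected 7.3 d signal
        + systematics.T @ rng.standard_normal(4) * 0.05
        + 0.005 * rng.standard_normal(t.size))

freqs = np.linspace(0.02, 1.0, 2000)                  # cycles/day
power = np.empty_like(freqs)
for i, f in enumerate(freqs):
    X = np.vstack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                   np.ones_like(t), systematics]).T
    coef, *_ = np.linalg.lstsq(X, flux, rcond=None)
    power[i] = coef[0] ** 2 + coef[1] ** 2
print("best period: %.2f d" % (1.0 / freqs[np.argmax(power)]))
```

Because the systematics basis is fit simultaneously at every frequency, trends shared with the basis cannot masquerade as periodic power, which is the core of the SIP approach.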
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.
2012-01-01
This paper presents the development and setup of a cloud screening and data quality control algorithm for a star photometer based on a CCD camera as detector. These algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, δAe(λ), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of δAe(λ) and W under different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable δAe(λ) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; southeastern Spain) for the measurements acquired between March 2007 and September 2009. The algorithm is evaluated with correlative measurements registered by a lidar system and also with all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions, including Saharan dust intrusion, biomass burning and pollution events.
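The screening logic (moving averages plus outlier detection) can be sketched with a moving median and a robust scatter estimate; the window length and threshold below are illustrative choices, not the paper's tuned values.

```python
# Moving-window outlier screening of AOD samples; thresholds are illustrative.
import numpy as np

def flag_clouds(time_h, aod, window_h=0.5, k=3.0):
    """Return a boolean mask marking cloud-suspect samples."""
    flagged = np.zeros(aod.size, dtype=bool)
    for i, t0 in enumerate(time_h):
        sel = np.abs(time_h - t0) <= window_h / 2
        med = np.median(aod[sel])
        mad = np.median(np.abs(aod[sel] - med)) + 1e-9
        if np.abs(aod[i] - med) > k * 1.4826 * mad:   # robust sigma from MAD
            flagged[i] = True
    return flagged

rng = np.random.default_rng(5)
t = np.linspace(20, 26, 300)                          # hours over one night
aod = 0.15 + 0.01 * rng.standard_normal(t.size)
aod[150:160] += 0.3                                   # simulated cloud spike
print("flagged indices:", np.where(flag_clouds(t, aod))[0])
```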
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John
2012-01-01
Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
NASA Astrophysics Data System (ADS)
Foy, Renaud; Éric, Pierre; Eysseric, Jérôme; Foy, Françoise; Fusco, Thierry; Girard, Julien; Le Van Suu, Auguste; Perruchot, Sandrine; Richaud, Pierre; Richaud, Yoann; Rondeau, Xavier; Tallon, Michel; Thiébaut, Éric; Boër, Michel
2007-09-01
The Polychromatic Laser Guide Star aims at providing the tilt measurement from an LGS without any natural guide star. Thus it allows adaptive optics to achieve full sky coverage. This is critical in particular to extend adaptive optics to the visible range, where the isoplanatic patch is so small that the probability of finding a natural star to measure the tilt is negligible. We report new results obtained within the framework of the Polychromatic LGS programme ELP-OA. Natural stars have been used to mimic the PLGS, in order to check the feasibility of using the difference in the tilt at two wavelengths to derive the tilt itself. We report results from the ATTILA experiment obtained at the 1.52 m telescope at Observatoire de Haute-Provence. Tilts derived from the differential tilts are compared with direct tilt measurements. The accuracy of the measurements is currently ~1.5 Airy disks rms at 550 nm. These results prove the feasibility of the Polychromatic Laser Guide Star programme ELP-OA. New algorithms based on inverse problems, under development within our programme, should shrink the error bars by 1 magnitude as soon as they run fast enough. We describe the ELP-OA demonstrator which we are setting up at the same telescope, with a special emphasis on the optimization of the excitation process, which definitely has to rely on the two-photon excitation of sodium atoms in the mesosphere. We also describe the implementation at the telescope, including the projector device, the focal instrumentation and the Nd:YAG-pumped dye lasers.
Rådholm, Karin; Neal, Bruce
2018-01-01
The Australian Dietary Guidelines (ADGs) and the Health Star Rating (HSR) front-of-pack labelling system are two national interventions to promote healthier diets. Our aim was to assess the degree of alignment between the two policies. Methods: Nutrition information was extracted for 65,660 packaged foods available in The George Institute’s Australian FoodSwitch database. Products were classified ‘core’ or ‘discretionary’ based on the ADGs, and an HSR was generated for each product irrespective of whether it is currently displayed on pack. Apparent outliers were identified as those products classified ‘core’ that received HSR ≤ 2.0, and those classified ‘discretionary’ that received HSR ≥ 3.5. Nutrient cut-offs were applied to determine whether apparent outliers were ‘high in’ salt, total sugar or saturated fat, and outlier status was thereby attributed to a failure of the ADGs or of the HSR algorithm. Results: 47,116 products (23,460 core; 23,656 discretionary) were included. Median (Q1, Q3) HSRs were 4.0 (3.0 to 4.5) for core and 2.0 (1.0 to 3.0) for discretionary products. Overall alignment was good: 86.6% of products received an HSR aligned with their ADG classification. Among 6324 products identified as apparent outliers, 5246 (83.0%) were ultimately determined to be ADG failures, largely caused by challenges in defining foods as ‘core’ or ‘discretionary’. In total, 1078 (17.0%) were determined to be true failures of the HSR algorithm. Conclusion: The scope of genuine misalignment between the ADGs and the HSR algorithm is very small. We provide evidence-informed recommendations for strengthening both policies to more effectively guide Australians towards healthier choices. PMID:29670024
NASA Astrophysics Data System (ADS)
Dorn-Wallenstein, Trevor Z.; Levesque, Emily
2017-11-01
Thanks to incredible advances in instrumentation, surveys like the Sloan Digital Sky Survey have been able to find and catalog billions of objects, ranging from local M dwarfs to distant quasars. Machine learning algorithms have greatly aided in the effort to classify these objects; however, there are regimes where these algorithms fail, where interesting oddities may be found. We present here an X-ray bright quasar misidentified as a red supergiant/X-ray binary, and a subsequent search of the SDSS quasar catalog for X-ray bright stars misidentified as quasars.
Realtime automatic metal extraction of medical x-ray images for contrast improvement
NASA Astrophysics Data System (ADS)
Prangl, Martin; Hellwagner, Hermann; Spielvogel, Christian; Bischof, Horst; Szkaliczki, Tibor
2006-03-01
This paper focuses on an approach for real-time metal extraction from x-ray images taken by modern x-ray machines like C-arms. Such machines are used for vessel diagnostics, surgical interventions, as well as cardiology, neurology and orthopedic examinations. They are very fast in taking images from different angles. For this reason, manual adjustment of contrast is infeasible, and automatic adjustment algorithms are applied that try to select the optimal radiation dose for contrast adjustment. Problems occur when metallic objects, e.g., a prosthesis or a screw, are in the absorption area of interest. In this case, the automatic adjustment mostly fails because the dark, metallic objects lead the algorithm to overdose the x-ray tube. This outshining effect results in overexposed images and bad contrast. To overcome this limitation, metallic objects have to be detected and extracted from the images that are taken as input for the adjustment algorithm. In this paper, we present a real-time solution for extracting metallic objects from x-ray images. We explore the characteristic features of metallic objects in x-ray images and their distinction from bone fragments, which form the basis of a successful approach to object segmentation and classification. Subsequently, we present our edge-based real-time approach for successful and fast automatic segmentation and classification of metallic objects. Finally, experimental results on the effectiveness and performance of our approach, based on a vast amount of input image data sets, are presented.
VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)
NASA Astrophysics Data System (ADS)
Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.
2016-02-01
Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).
An FBG acoustic emission source locating system based on PHAT and GA
NASA Astrophysics Data System (ADS)
Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun
2017-09-01
Using acoustic emission locating technology to monitor structural health is important for ensuring the continuous and healthy operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT) weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested on the marble plate surface, and the results show that the absolute locating error is within 10 mm, which proves the accuracy of this locating method.
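The PHAT-weighted generalized cross-correlation step is standard enough to sketch directly: whitening the cross-spectrum by its magnitude sharpens the correlation peak used to read off the time difference. The sample rate and waveform below are invented for the demonstration.

```python
# GCC-PHAT time-delay estimate between two sensor channels; dividing the
# cross-spectrum by its magnitude is what distinguishes PHAT from plain
# cross-correlation.
import numpy as np

def gcc_phat(x, y, fs):
    """Delay of x relative to y, in seconds."""
    n = x.size + y.size
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
    half = n // 2
    cc = np.concatenate((cc[-half:], cc[:half + 1]))  # center zero lag
    return (np.argmax(np.abs(cc)) - half) / fs

fs = 1_000_000                                        # assumed sample rate, Hz
t = np.arange(2048) / fs
burst = np.exp(-((t - 3e-4) / 5e-5) ** 2) * np.sin(2 * np.pi * 1.5e5 * t)
delay = 12 / fs
ch1, ch2 = burst, np.interp(t - delay, t, burst)      # ch2 lags by 12 samples
print("estimated delay: %.2f us" % (gcc_phat(ch2, ch1, fs) * 1e6))
```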
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to this problem, although applying optimization raises difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive ABC algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed technique performs effectively in terms of face recognition accuracy.
NASA Technical Reports Server (NTRS)
Cheng, Rendy P.; Tischler, Mark B.; Celi, Roberto
2006-01-01
This research describes a new methodology for the extraction of a high-order, linear time-invariant model, which allows the periodicity of the helicopter response to be accurately captured. This model provides the needed level of dynamic fidelity to permit an analysis and optimization of the AFCS and HHC algorithms. The key results of this study indicate that the closed-loop HHC system has little influence on the AFCS or on the vehicle handling qualities, which indicates that the AFCS does not need modification to work with the HHC system. However, the results show that the vibration response to maneuvers must be considered during the HHC design process, and this leads to much higher required HHC loop crossover frequencies. This research also demonstrates that the transient vibration responses during maneuvers can be reduced by optimizing the closed-loop higher harmonic control algorithm using conventional control system analyses.
NASA Astrophysics Data System (ADS)
Zhang, W.; Jia, M. P.
2018-06-01
When an incipient fault appears in a rolling bearing, the fault feature is weak and easily submerged in strong background noise. In this paper, wavelet total variation denoising based on kurtosis (Kurt-WATV) is studied, which can extract the incipient fault feature of the rolling bearing more effectively. The proposed algorithm contains the following main steps: (a) establish a sparse diagnosis model; (b) represent the periodic impulses based on a redundant wavelet dictionary; (c) solve the joint optimization problem by the alternating direction method of multipliers (ADMM); (d) select the optimal wavelet subbands using the kurtosis value as criterion and obtain the reconstructed signal. This paper uses the overcomplete rational-dilation wavelet transform (ORDWT) as the dictionary, and adjusts the control parameters to achieve concentration in the time-frequency plane. An incipient rolling bearing fault is used as an example, and the results show the effectiveness and superiority of the proposed Kurt-WATV bearing fault diagnosis algorithm.
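Step (c) relies on ADMM; as a self-contained illustration of that machinery, the sketch below solves plain 1-D total-variation denoising. The Kurt-WATV model instead penalizes wavelet subband coefficients, so this is the optimization pattern, not the paper's model.

```python
# Generic 1-D total-variation denoising by ADMM:
# minimize 0.5*||x - y||^2 + lam*||D x||_1 with D the first-difference operator.
import numpy as np

def tv_denoise_admm(y, lam=0.5, rho=1.0, n_iter=100):
    n = y.size
    D = np.diff(np.eye(n), axis=0)             # (n-1, n) difference operator
    lhs = np.eye(n) + rho * D.T @ D            # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, y + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)         # shrink the differences
        u += D @ x - z                         # dual variable update
    return x

rng = np.random.default_rng(6)
clean = np.repeat([0.0, 1.0, 0.2, -0.5], 100)  # piecewise-constant test signal
noisy = clean + 0.15 * rng.standard_normal(clean.size)
denoised = tv_denoise_admm(noisy, lam=0.5)
```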
NASA Astrophysics Data System (ADS)
Adiri, Zakaria; El Harti, Abderrazak; Jellouli, Amine; Lhissou, Rachid; Maacha, Lhou; Azmi, Mohamed; Zouhair, Mohamed; Bachaoui, El Mostafa
2017-12-01
Lineament mapping occupies an important place in many studies, including geology, hydrogeology and topography. With the help of remote sensing techniques, lineaments can be identified more reliably thanks to strong advances in the available data and methods, exceeding the usual classical procedures and achieving more precise results. The aim of this work is the comparison of ASTER, Landsat-8 and Sentinel-1 sensor data in automatic lineament extraction. In addition to the image data, the followed approach includes the use of a pre-existing geological map and a Digital Elevation Model (DEM) as well as ground truth. Through a fully automatic approach consisting of a combination of an edge detection algorithm and a line-linking algorithm, we found the optimal parameters for automatic lineament extraction in the study area. Thereafter, the comparison and validation of the obtained results showed that the Sentinel-1 data are more efficient in restituting lineaments, which indicates the better performance of radar data compared with optical data in this kind of study.
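A common fully automatic recipe of this kind (edge detection followed by line linking) can be sketched with scikit-image; the file name is hypothetical and the thresholds are placeholders to tune per scene, not the study's parameters.

```python
# Edge detection + line linking as a generic automatic lineament recipe.
import numpy as np
from skimage import io, img_as_float
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

band = img_as_float(io.imread("sentinel1_vv.tif", as_gray=True))  # hypothetical file

edges = canny(band, sigma=2.0)                           # edge detection step
segments = probabilistic_hough_line(edges, threshold=10,
                                    line_length=50, line_gap=3)  # linking step
lengths = [np.hypot(p1[0] - p0[0], p1[1] - p0[1]) for p0, p1 in segments]
print(len(segments), "lineament segments, mean length %.1f px" % np.mean(lengths))
```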
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.
K2 Variable Catalogue: Variable stars and eclipsing binaries in K2 campaigns 1 and 0
NASA Astrophysics Data System (ADS)
Armstrong, D. J.; Kirk, J.; Lam, K. W. F.; McCormac, J.; Walker, S. R.; Brown, D. J. A.; Osborn, H. P.; Pollacco, D. L.; Spake, J.
2015-07-01
Aims: We have created a catalogue of variable stars found from a search of the publicly available K2 mission data from Campaigns 1 and 0. This catalogue provides the identifiers of 8395 variable stars, including 199 candidate eclipsing binaries with periods up to 60 d and 3871 periodic or quasi-periodic objects, with periods up to 20 d for Campaign 1 and 15 d for Campaign 0. Methods: Lightcurves are extracted and detrended from the available data. These are searched using a combination of algorithmic and human classification, leading to a classification of each object as an eclipsing binary, sinusoidal periodic, quasi-periodic, or aperiodic variable. The source of the variability is not identified, but could arise in the non-eclipsing-binary cases from pulsation or stellar activity. Each object is cross-matched against variable star related guest observer proposals to the K2 mission, which specifies the variable type in some cases. The detrended lightcurves are also compared to lightcurves currently publicly available. Results: The resulting catalogue gives the ID, type, period, semi-amplitude, and range of the variation seen. We also make available the detrended lightcurves for each object. The catalogue is available at http://deneb.astro.warwick.ac.uk/phrlbj/k2varcat/ and at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/579/A19
NASA Astrophysics Data System (ADS)
Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.
2016-12-01
Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights to minimize the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane due to shale gas extraction. Methane sensors installed on four towers located in northeastern Pennsylvania have measured atmospheric concentrations since May 2015. Inverse Lagrangian dispersion model runs are performed from each of these tower locations for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques, and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming and wetlands. Initial estimates for each source are calculated based on emission factors from the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
NASA Astrophysics Data System (ADS)
Lackey, Benjamin D.; Kyutoku, Koutarou; Shibata, Masaru; Brady, Patrick R.; Friedman, John L.
2014-02-01
Information about the neutron-star equation of state is encoded in the waveform of a black hole-neutron star system through tidal interactions and the possible tidal disruption of the neutron star. During the inspiral this information depends on the tidal deformability Λ of the neutron star, and we find that the best-measured parameter during the merger and ringdown is consistent with Λ as well. We performed 134 simulations where we systematically varied the equation of state as well as the mass ratio, neutron star mass, and aligned spin of the black hole. Using these simulations we develop an analytic representation of the full inspiral-merger-ringdown waveform calibrated to these numerical waveforms; we use this analytic waveform and a Fisher matrix analysis to estimate the accuracy to which Λ can be measured with gravitational-wave detectors. We find that although the inspiral tidal signal is small, coherently combining this signal with the merger-ringdown matter effect improves the measurability of Λ by a factor of ~3 over using just the merger-ringdown matter effect alone. However, incorporating correlations between all the waveform parameters then decreases the measurability of Λ by a factor of ~3. The uncertainty in Λ increases with the mass ratio, but decreases as the black hole spin increases. Overall, a single Advanced LIGO detector can only marginally measure Λ for mass ratios Q = 2-5, black hole spins J_BH/M_BH^2 = -0.5 to 0.75, and neutron star masses M_NS = 1.2-1.45 M_⊙ at an optimally oriented distance of 100 Mpc. For the proposed Einstein Telescope, however, the uncertainty in Λ is an order of magnitude smaller.
Image-based path planning for automated virtual colonoscopy navigation
NASA Astrophysics Data System (ADS)
Hong, Wei
2008-03-01
Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon from computed tomography (CT) scans. In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, some time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during fly-through navigation. Moreover, because of the efficiency of our path planning and rendering algorithms, our VC fly-through navigation system can still guarantee 30 FPS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, Leo P.; Cenko, S. Bradley; Gehrels, Neil
This is a supplement to the Letter of Singer et al., in which we demonstrated a rapid algorithm for obtaining joint 3D estimates of sky location and luminosity distance from observations of binary neutron star mergers with Advanced LIGO and Virgo. We argued that combining the reconstructed volumes with positions and redshifts of possible host galaxies can provide large-aperture but small field of view instruments with a manageable list of targets to search for optical or infrared emission. In this Supplement, we document the new HEALPix-based file format for 3D localizations of gravitational-wave transients. We include Python sample code to show the reader how to perform simple manipulations of the 3D sky maps and extract ranked lists of likely host galaxies. Finally, we include mathematical details of the rapid volume reconstruction algorithm.
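Assuming the documented four-layer HEALPix format (per-pixel probability, distance location, distance scale, and distance normalization), a hedged healpy sketch for evaluating the localization posterior at a galaxy's position; the file name and the galaxy itself are placeholders.

```python
# Evaluate the 3D localization posterior density at an assumed galaxy
# position, following the documented prob/distmu/distsigma/distnorm layout.
import healpy as hp
import numpy as np
from scipy.stats import norm

prob, distmu, distsigma, distnorm = hp.read_map("bayestar.fits.gz",
                                                field=[0, 1, 2, 3])
nside = hp.npix2nside(prob.size)

# Hypothetical galaxy: RA, Dec in degrees, distance in Mpc.
ra, dec, r = np.array([130.1]), np.array([-40.2]), np.array([95.0])
ipix = hp.ang2pix(nside, np.radians(90.0 - dec), np.radians(ra))

dp_dV = (prob[ipix] / hp.nside2pixarea(nside)
         * distnorm[ipix] * norm(distmu[ipix], distsigma[ipix]).pdf(r))
print("probability density per Mpc^3:", dp_dV)
```

Ranking a whole catalog by this density, as the supplement describes, amounts to evaluating dp_dV for every galaxy and sorting in descending order.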
Correcting STIS CCD Point-Source Spectra for CTE Loss
NASA Technical Reports Server (NTRS)
Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus
2006-01-01
We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, rendering the Poisson noise associated with the source detection itself the dominant contributor to the total flux calibration uncertainty.
A Simulation-Optimization Model for the Management of Seawater Intrusion
NASA Astrophysics Data System (ADS)
Stanko, Z.; Nishikawa, T.
2012-12-01
Seawater intrusion is a common problem in coastal aquifers where excessive groundwater pumping can lead to chloride contamination of a freshwater resource. Simulation-optimization techniques have been developed to determine optimal management strategies while mitigating seawater intrusion. The simulation models are often density-independent groundwater-flow models that may assume a sharp interface and/or use equivalent freshwater heads. The optimization methods are often linear-programming (LP) based techniques that require simplifications of the real-world system. However, seawater intrusion is a highly nonlinear, density-dependent flow and transport problem, which requires the use of nonlinear-programming (NLP) or global-optimization (GO) techniques. NLP approaches are difficult because of the need for gradient information; therefore, we have chosen a GO technique for this study. Specifically, we have coupled a multi-objective genetic algorithm (GA) with a density-dependent groundwater-flow and transport model to simulate and identify strategies that optimally manage seawater intrusion. GA is a heuristic approach, often chosen when seeking optimal solutions to highly complex and nonlinear problems where LP or NLP methods cannot be applied. The GA utilized in this study is the Epsilon-Nondominated Sorted Genetic Algorithm II (ɛ-NSGAII), which can approximate a Pareto-optimal front between competing objectives. This algorithm has several key features: real and/or binary variable capabilities; an efficient sorting scheme; preservation and diversity of good solutions; dynamic population sizing; constraint handling; parallelizable implementation; and user-controlled precision for each objective. The simulation model is SEAWAT, the USGS model that couples MODFLOW with MT3DMS for variable-density flow and transport. ɛ-NSGAII and SEAWAT were efficiently linked together through a C-Fortran interface. The simulation-optimization model was first tested by using a published density-independent flow model test case that was originally solved using a sequential LP method with the USGS's Ground-Water Management Process (GWM). For the problem formulation, the objective is to maximize net groundwater extraction, subject to head and head-gradient constraints. The decision variables are pumping rates at fixed wells and the system's state is represented with freshwater hydraulic head. The results of the proposed algorithm were similar to the published results (within 1%); discrepancies may be attributed to differences in the simulators and inherent differences between LP and GA. The GWM test case was then extended to a density-dependent flow and transport version. As formulated, the optimization problem is infeasible because of the density effects on hydraulic head. Therefore, the sum of the squared constraint violation (SSC) was used as a second objective. The result is a Pareto curve showing optimal pumping rates versus the SSC. Analysis of this curve indicates that a similar net-extraction rate to the test case can be obtained with a minor violation in vertical head-gradient constraints. This study shows that a coupled ɛ-NSGAII/SEAWAT model can be used for the management of groundwater seawater intrusion. In the future, the proposed methodology will be applied to a real-world seawater intrusion and resource management problem for Santa Barbara, CA.
Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong
2015-01-01
The quality of potatoes is directly related to their edible value and industrial value. Hollow heart of potato, a physiological disease occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method that uses semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared with reflection and transmission hyperspectral imaging, semi-transmission hyperspectral imaging can obtain clearer images that contain the internal quality information of agricultural products. In this study, 224 potato samples (149 normal samples and 75 hollow samples) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire the hyperspectral images (390-1,040 nm) of the potato samples; the average spectra of the regions of interest were then extracted for spectral characteristic analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wavebands; the recognition rate on the test set was only 87.5%. To simplify the model, the competitive adaptive reweighted sampling (CARS) algorithm and the successive projections algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition rate of 94.64% on the test set was obtained by using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and the grid search algorithm, were used to optimize the SVM model parameters: penalty parameter c and kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was shown to give the optimal model parameters (c = 10.6591, g = 0.3497), and the best recognition accuracy was obtained for the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid, non-destructive detection of hollow heart of potato.
Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo
2017-01-01
This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic, and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; and second, a metaheuristic search is used to identify the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
Social Milieu Oriented Routing: A New Dimension to Enhance Network Security in WSNs.
Liu, Lianggui; Chen, Li; Jia, Huiling
2016-02-19
In large-scale wireless sensor networks (WSNs), in order to enhance network security, it is crucial for a trustor node to perform social milieu oriented routing to a target trustee node to carry out trust evaluation. This challenging social milieu oriented routing with more than one end-to-end Quality of Trust (QoT) constraint has been proved to be NP-complete. Heuristic algorithms with polynomial and pseudo-polynomial-time complexities are often used to deal with this challenging problem. However, existing solutions cannot guarantee the efficiency of searching; that is, they can hardly avoid obtaining partial optimal solutions during a searching process. Quantum annealing (QA) uses delocalization and tunneling to avoid falling into local minima without sacrificing execution time, and has proved to be a promising approach to many optimization problems in the recent literature. In this paper, for the first time, with the help of a novel approach, namely configuration path-integral Monte Carlo (CPIMC) simulations, a QA-based optimal social trust path (QA_OSTP) selection algorithm is applied to the extraction of the optimal social trust path in large-scale WSNs. Extensive experiments have been conducted, and the experimental results demonstrate that QA_OSTP outperforms its heuristic counterparts.
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used feature extraction method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of the eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
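The conventional CSP step that this work builds on can be sketched as a generalized eigendecomposition of the two class-mean covariance matrices, keeping eigenvector pairs at both eigenvalue extremes. This is a minimal illustration on synthetic trials, not the authors' STFSCSP pipeline.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    c1, c2 = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; eigenvalues ascend.
    vals, vecs = eigh(c1, c1 + c2)
    # Keep eigenvectors at both extremes (most discriminative for each class).
    keep = list(range(n_pairs)) + list(range(-n_pairs, 0))
    return vecs[:, keep]

rng = np.random.default_rng(1)
W = csp_filters(rng.standard_normal((20, 8, 250)),
                rng.standard_normal((20, 8, 250)))
print(W.shape)  # (8 channels, 6 spatial filters)
```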
A reflection model for eclipsing binary stars
NASA Technical Reports Server (NTRS)
Wood, D. B.
1973-01-01
A highly accurate reflection model has been developed which emphasizes efficiency of computer calculation. It is assumed that the heating of the irradiated star must depend upon the following properties of the irradiating star: (1) effective temperature; (2) apparent area as seen from a point on the surface of the irradiated star; (3) limb darkening; and (4) zenith distance of the apparent centre as seen from a point on the surface of the irradiated star. The algorithm eliminates the need to integrate over the irradiating star while providing a highly accurate representation of the integrated bolometric flux, even for gravitationally distorted stars.
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S.; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-01-01
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400–1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557–701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce. PMID:26610510
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging.
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-11-20
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400-1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557-701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce.
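The two-waveband rules reported above reduce to simple per-pixel arithmetic on a hyperspectral cube. The sketch below assumes a hypothetical cube, band-index mapping, and threshold; only the 552/701 nm and 557−701 nm band pairs come from the paper.

```python
import numpy as np

def band(wavelength_nm, start=400, stop=1000, n_bands=121):
    """Map a wavelength in nm to a band index (assumed uniform band grid)."""
    return int(round((wavelength_nm - start) / (stop - start) * (n_bands - 1)))

rng = np.random.default_rng(2)
cube = rng.random((64, 64, 121))    # placeholder cube: rows x cols x bands

ratio_image = cube[:, :, band(552)] / (cube[:, :, band(701)] + 1e-9)
subtraction_image = cube[:, :, band(557)] - cube[:, :, band(701)]

# A simple threshold (the value here is illustrative, not from the paper)
# then separates discolored from sound pixels.
discolored_mask = ratio_image > 1.2
print(discolored_mask.mean())
```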
An improved finger-vein recognition algorithm based on template matching
NASA Astrophysics Data System (ADS)
Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping
2016-10-01
Finger-vein recognition has become one of the most popular biometric identification methods. Research on recognition algorithms has always been a key point in this field, and many applicable algorithms have been developed so far. However, there are still some problems in practice: variance of the finger position may lead to image distortion and shifting, and matching parameters determined according to experience during the identification process may also reduce the adaptability of an algorithm. Focusing on the above problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. In order to enhance the robustness of the algorithm to image distortion, the least squares error method is adopted to correct an oblique finger. During feature extraction, a local adaptive threshold method is adopted. As regards the matching scores, we optimized the translation preferences as well as the matching distance between the input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method can effectively improve robustness under finger shifting and rotation conditions.
NASA Astrophysics Data System (ADS)
Ray, Shonket; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina
2016-03-01
This work details a methodology to obtain optimal parameter values for a locally-adaptive texture analysis algorithm that extracts mammographic texture features representative of breast parenchymal complexity for predicting false-positive (FP) recalls from breast cancer screening with digital mammography. The algorithm has two components: (1) adaptive selection of localized regions of interest (ROIs) and (2) Haralick texture feature extraction via Gray-Level Co-Occurrence Matrices (GLCM). The following parameters were systematically varied: mammographic views used, upper limit of the ROI window size used for adaptive ROI selection, GLCM distance offsets, and gray levels (binning) used for feature extraction. Each iteration per parameter set had logistic regression with stepwise feature selection performed on a clinical screening cohort of 474 non-recalled women and 68 FP-recalled women; FP recall prediction was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), and associations between the extracted features and FP recall were assessed via odds ratios (OR). A default instance of the mediolateral oblique (MLO) view, an upper ROI size limit of 143.36 mm (2048 pixels), a GLCM distance offset combination range of 0.07 to 0.84 mm (1 to 12 pixels), and 16 GLCM gray levels was set. The highest ROC performance value of AUC = 0.77 [95% confidence interval: 0.71-0.83] was obtained at three specific instances: the default instance, an upper ROI window equal to 17.92 mm (256 pixels), and gray levels set to 128. The texture feature of sum average was chosen as a statistically significant (p < 0.05) predictor and associated with higher odds of FP recall for 12 out of 14 total instances.
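A sketch of the GLCM step using scikit-image, computing the Haralick sum average feature that the study singles out. The ROI patch is synthetic; the distances and 16 gray levels loosely mirror the default instance described, and the sum-average indexing convention should be checked against Haralick's original definition.

```python
import numpy as np
from skimage.feature import graycomatrix

rng = np.random.default_rng(3)
roi = rng.integers(0, 16, (64, 64), dtype=np.uint8)   # placeholder ROI patch

glcm = graycomatrix(roi, distances=[1, 6, 12],
                    angles=[0, np.pi / 2], levels=16,
                    symmetric=True, normed=True)

def sum_average(p):
    """Haralick sum average: sum over k of k * p_{x+y}(k)."""
    n = p.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    k = (i + j).ravel()
    pk = np.bincount(k, weights=p.ravel(), minlength=2 * n - 1)
    return np.sum(np.arange(2 * n - 1) * pk)

# Average the feature over all distance/angle combinations.
vals = [sum_average(glcm[:, :, d, a])
        for d in range(glcm.shape[2]) for a in range(glcm.shape[3])]
print(np.mean(vals))
```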
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2017-01-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals were reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average errors for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work provides a practical solution to rendering markerless breathing signals using CBCT projections for thoracic and abdominal patients. PMID:27008349
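A minimal sketch of building an Amsterdam Shroud image from CBCT projections, assuming `projections` is a hypothetical stack (n_projections x rows x cols) and an assumed unattenuated intensity I0. Each intensity projection is converted to attenuation, collapsed along the lateral axis, and the 1D profiles are stacked column by column; a robust z-normalization then enhances the weak oscillating (breathing) structure.

```python
import numpy as np

rng = np.random.default_rng(4)
projections = 1000.0 * np.exp(-rng.random((360, 128, 128)))  # placeholder data
i0 = 1000.0                                                  # assumed flood intensity

attenuation = -np.log(projections / i0)       # intensity -> attenuation images
shroud = attenuation.sum(axis=2).T            # rows x projection angle (AS image)

# Robust z-normalization per projection angle (column-wise), using the median
# and the median absolute deviation to resist outliers.
med = np.median(shroud, axis=0)
mad = np.median(np.abs(shroud - med), axis=0) + 1e-9
shroud_norm = (shroud - med) / mad
print(shroud_norm.shape)
```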
Retrospective Cost Adaptive Control with Concurrent Closed-Loop Identification
NASA Astrophysics Data System (ADS)
Sobolic, Frantisek M.
Retrospective cost adaptive control (RCAC) is a discrete-time direct adaptive control algorithm for stabilization, command following, and disturbance rejection. RCAC is known to work on systems given minimal modeling information, namely the leading numerator coefficient and any nonminimum-phase (NMP) zeros of the plant transfer function. This information is normally needed a priori and is key to the development of the filter, also known as the target model, within the retrospective performance variable. A novel approach is developed to alleviate the need for prior modeling of both the leading coefficient of the plant transfer function and any NMP zeros. The extension to the RCAC algorithm is the concurrent optimization of both the target model and the controller coefficients. This concurrent optimization is a quadratic optimization problem in the target model and the controller coefficients separately; however, it is not convex as a joint function of both variables, and therefore nonconvex optimization methods are needed. Finally, insights within RCAC, including intercalated injection between the controller numerator and denominator, reveal that RCAC fits a specific closed-loop transfer function to the target model. We exploit this interpretation by investigating several closed-loop identification architectures in order to extract this information for use in the target model.
Effects of Combined Stellar Feedback on Star Formation in Stellar Clusters
NASA Astrophysics Data System (ADS)
Wall, Joshua Edward; McMillan, Stephen; Pellegrino, Andrew; Mac Low, Mordecai; Klessen, Ralf; Portegies Zwart, Simon
2018-01-01
We present results of hybrid MHD+N-body simulations of star cluster formation and evolution, including self-consistent feedback from the stars in the form of radiation, winds, and supernovae from all stars more massive than 7 solar masses. The MHD is modeled with the adaptive mesh refinement code FLASH, while the N-body computations are done with a direct algorithm. Radiation is modeled using ray tracing along long characteristics in directions distributed using the HEALPix algorithm, and causes ionization and momentum deposition, while winds and supernovae conserve momentum and energy during injection. Stellar evolution is followed using power-law fits to evolution models in SeBa. We use a gravity bridge within the AMUSE framework to couple the N-body dynamics of the stars to the gas dynamics in FLASH. Feedback from the massive stars alters the structure of young clusters as gas ejection occurs. We diagnose this behavior by distinguishing between fractal distribution and central clustering using a Q parameter computed from the minimum spanning tree of each model cluster. Global effects of feedback in our simulations will also be discussed.
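The Q diagnostic mentioned above (Cartwright & Whitworth 2004) can be computed from the cluster's minimum spanning tree. Below is a sketch with hypothetical 2D positions; the normalizations follow the commonly quoted convention and should be verified against the paper.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def q_parameter(xy):
    n = len(xy)
    d = squareform(pdist(xy))                   # pairwise separations
    mst = minimum_spanning_tree(d)
    edges = mst.data                            # the n-1 MST edge lengths
    r_cluster = np.max(np.linalg.norm(xy - xy.mean(axis=0), axis=1))
    area = np.pi * r_cluster ** 2
    m_bar = edges.mean() / (np.sqrt(area * n) / (n - 1))   # normalized MST edge
    s_bar = d[np.triu_indices(n, 1)].mean() / r_cluster    # normalized separation
    # Q below ~0.8 indicates fractal substructure; above ~0.8, central clustering.
    return m_bar / s_bar

rng = np.random.default_rng(5)
print(q_parameter(rng.standard_normal((300, 2))))
```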
Agounad, Said; Aassif, El Houcein; Khandouch, Younes; Maze, Gérard; Décultot, Dominique
2018-02-01
The acoustic scattering of a plane wave by an elastic cylindrical shell is studied. A new approach is developed to predict the form function of an immersed cylindrical shell of radius ratio b/a ('b' is the inner radius and 'a' is the outer radius). The prediction of the backscattered form function is investigated by a combined approach between fuzzy clustering algorithms and bio-inspired algorithms. Four well-known fuzzy clustering algorithms, the fuzzy c-means (FCM), the Gustafson-Kessel algorithm (GK), the fuzzy c-regression model (FCRM) and the Gath-Geva algorithm (GG), are combined with particle swarm optimization and a genetic algorithm. The symmetric and antisymmetric circumferential waves A, S0, A1, S1 and S2 are investigated in a reduced frequency (k1a) range extending over 0.1
Herscovici, Sarah; Pe'er, Avivit; Papyan, Surik; Lavie, Peretz
2007-02-01
Scoring of REM sleep based on polysomnographic recordings is a laborious and time-consuming process. The growing number of ambulatory devices designed for cost-effective home-based diagnostic sleep recordings necessitates the development of a reliable automatic REM sleep detection algorithm that is not based on the traditional electroencephalographic, electrooculographic and electromyographic recordings trio. This paper presents an automatic REM detection algorithm based on the peripheral arterial tone (PAT) signal and actigraphy, which are recorded with an ambulatory wrist-worn device (Watch-PAT100). The PAT signal is a measure of the pulsatile volume changes at the finger tip reflecting sympathetic tone variations. The algorithm was developed using a training set of 30 patients recorded simultaneously with polysomnography and the Watch-PAT100. Sleep records were divided into 5 min intervals, and two time series were constructed from the PAT amplitudes and PAT-derived inter-pulse periods in each interval. A prediction function based on 16 features extracted from the above time series that determines the likelihood of detecting a REM epoch was developed. The coefficients of the prediction function were determined using a genetic algorithm (GA) optimization process tuned to maximize a price function depending on the sensitivity, specificity and agreement of the algorithm in comparison with the gold standard of polysomnographic manual scoring. Based on a separate validation set of 30 patients, the overall sensitivity, specificity and agreement of the automatic algorithm in identifying standard 30 s epochs of REM sleep were 78%, 92% and 89%, respectively. Deploying this REM detection algorithm in a wrist-worn device could be very useful for unattended ambulatory sleep monitoring. The innovative method of optimization using a genetic algorithm has proven to yield robust results in the validation set.
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).
A continuous arc delivery optimization algorithm for CyberKnife m6.
Kearney, Vasant; Descovich, Martina; Sudhyadhom, Atchar; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D
2018-06-01
This study aims to reduce the delivery time of CyberKnife m6 treatments by allowing for noncoplanar continuous arc delivery. To achieve this, a novel noncoplanar continuous arc delivery optimization algorithm was developed for the CyberKnife m6 treatment system (CyberArc-m6). CyberArc-m6 uses a five-step overarching strategy, in which an initial set of beam geometries is determined, the robotic delivery path is calculated, direct aperture optimization is conducted, intermediate MLC configurations are extracted, and the final beam weights are computed for the continuous arc radiation source model. This algorithm was implemented on five prostate and three brain patients, previously planned using a conventional step-and-shoot CyberKnife m6 delivery technique. The dosimetric quality of the CyberArc-m6 plans was assessed using locally confined mutual information (LCMI), conformity index (CI), heterogeneity index (HI), and a variety of common clinical dosimetric objectives. Using conservative optimization tuning parameters, CyberArc-m6 plans were able to achieve an average CI difference of 0.036 ± 0.025, an average HI difference of 0.046 ± 0.038, and an average LCMI of 0.920 ± 0.030 compared with the original CyberKnife m6 plans. Including a 5 s per minute image alignment time and a 5-min setup time, conservative CyberArc-m6 plans achieved an average treatment delivery speed up of 1.545x ± 0.305x compared with step-and-shoot plans. The CyberArc-m6 algorithm was able to achieve dosimetrically similar plans compared to their step-and-shoot CyberKnife m6 counterparts, while simultaneously reducing treatment delivery times. © 2018 American Association of Physicists in Medicine.
Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin
2014-03-10
The star tracker is one of the most promising attitude measurement devices, widely used in spacecraft for its high accuracy. High dynamic performance is becoming its major restriction and requires immediate focus and promotion. A star image restoration approach based on the motion degradation model of variable angular velocity is proposed in this paper. This method can overcome the problem of energy dispersion and signal-to-noise ratio (SNR) decrease resulting from the smearing of the star spot, thus preventing failed extraction and decreased star centroid accuracy. Simulations and laboratory experiments are conducted to verify the proposed methods. The restoration results demonstrate that the described method can recover the star spot from a long motion trail to the shape of a Gaussian distribution under conditions of variable angular velocity and long exposure time. The energy of the star spot can be concentrated to ensure high SNR and high position accuracy. These features are crucial to the subsequent star extraction and the whole performance of the star tracker.
Control strategies for wind farm power optimization: LES study
NASA Astrophysics Data System (ADS)
Ciri, Umberto; Rotea, Mario; Leonardi, Stefano
2017-11-01
Turbines in wind farms operate in off-design conditions as wake interactions occur for particular wind directions. Advanced wind farm control strategies aim at coordinating and adjusting turbine operations to mitigate power losses in such conditions. Coordination is achieved by controlling on upstream turbines either the wake intensity, through the blade pitch angle or the generator torque, or the wake direction, through yaw misalignment. Downstream turbines can be adapted to work in waked conditions and limit power losses, using the blade pitch angle or the generator torque. As wind conditions in wind farm operations may change significantly, it is difficult to determine and parameterize the variations of the coordinated optimal settings. An alternative is model-free control and optimization of wind farms, which does not require any parameterization and can track the optimal settings as conditions vary. In this work, we employ a model-free optimization algorithm, extremum-seeking control, to find the optimal set-points of generator torque, blade pitch and yaw angle for a three-turbine configuration. Large-Eddy Simulations are used to provide a virtual environment to evaluate the performance of the control strategies under realistic, unsteady incoming wind. This work was supported by the National Science Foundation, Grants No. 1243482 (the WINDINSPIRE project) and IIP 1362033 (I/UCRC WindSTAR). TACC is acknowledged for providing computational time.
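Extremum-seeking control of the kind used above admits a compact discrete-time sketch: a sinusoidal dither probes the objective, demodulation yields a gradient estimate, and an integrator drives the set-point toward the optimum. The quadratic farm-power surrogate and all gains below are illustrative assumptions, not values from the study.

```python
import numpy as np

def farm_power(yaw):
    """Placeholder objective with a maximum at yaw = 15 deg."""
    return 1.0 - 0.002 * (yaw - 15.0) ** 2

a, omega, gain = 1.0, 0.5, 4.0        # dither amplitude, frequency, integrator gain
u_hat, J_avg, dt = 0.0, 0.0, 0.1
for k in range(20000):
    t = k * dt
    dither = a * np.sin(omega * t)
    J = farm_power(u_hat + dither)    # probe the objective with the dither applied
    J_avg += 0.05 * (J - J_avg)       # slow low-pass to strip the DC part of J
    grad_est = (J - J_avg) * np.sin(omega * t)   # demodulation ~ gradient estimate
    u_hat += gain * grad_est * dt     # integrate toward the optimum
print(u_hat)                           # should settle near the optimal yaw (~15 deg)
```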
Compression of next-generation sequencing quality scores using memetic algorithm
2014-01-01
Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs compression codebook using MA based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
Gradient maintenance: A new algorithm for fast online replanning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Li, X. Allen
2015-06-15
Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended plan quality of the original plan would be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by several critical structures.
On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment
Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela
2017-01-01
Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems. PMID:28049820
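The greedy starting assignment described above can be illustrated in a few lines: each request goes to the feasible vehicle that adds the least travel cost, after which the paper's constrained optimization would refine the solution. Coordinates, capacities, and the Manhattan travel-cost stand-in are hypothetical.

```python
def manhattan(p, q):
    """Toy stand-in for a road-network travel-time query."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

vehicles = {"v1": {"pos": (0, 0), "cap": 2, "riders": []},
            "v2": {"pos": (5, 5), "cap": 2, "riders": []}}
requests = [{"id": "r1", "origin": (1, 0)},
            {"id": "r2", "origin": (4, 6)},
            {"id": "r3", "origin": (0, 2)}]

for req in requests:
    # Candidate vehicles with spare capacity, scored by added travel cost.
    feasible = [(manhattan(v["pos"], req["origin"]), name)
                for name, v in vehicles.items()
                if len(v["riders"]) < v["cap"]]
    if feasible:
        _, best = min(feasible)       # greedy: least-cost feasible vehicle
        vehicles[best]["riders"].append(req["id"])

print({name: v["riders"] for name, v in vehicles.items()})
```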
Remote sensing imagery classification using multi-objective gravitational search algorithm
NASA Astrophysics Data System (ADS)
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2016-10-01
Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) (Jm) measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first implemented to extract texture features of the RSI. Then, the texture features are combined with the spectral features to construct the spatial-spectral feature space/set of the RSI. Afterwards, clustering of the spectral-spatial feature set is carried out on the basis of the proposed method. To be specific, cluster centers are randomly generated initially, after which they are updated and optimized adaptively by employing the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained, and a number of classification results of the RSI are produced, from which users can pick the most promising one according to their problem requirements. To validate the effectiveness of the proposed method quantitatively and qualitatively, the proposed classification method was applied to classify two aerial high-resolution remote sensing images. The obtained classification results were compared with those produced by two single cluster validity index based methods and by two state-of-the-art multi-objective optimization based classification methods. Comparison results show that the proposed method can achieve more accurate RSI classification.
On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment.
Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela
2017-01-17
Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.
Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu
2018-05-01
Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization processes. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with sine-cosine reference signals again. An experimental study is implemented to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
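The first MCM layer is ordinary CCA against sine-cosine references. A sketch of that frequency-scoring step (not the full three-layer model) using scikit-learn's CCA follows, with a synthetic EEG segment (8 channels, 2 s at 250 Hz) as a stand-in for real data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def reference(freq, fs, n_samples, n_harmonics=2):
    """Sine-cosine reference matrix for one candidate stimulus frequency."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.array(rows).T

def cca_corr(eeg, ref):
    """Largest canonical correlation between an EEG segment and a reference set."""
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

fs, n = 250, 500
rng = np.random.default_rng(6)
# Synthetic segment: noise plus a 10 Hz component common to all 8 channels.
eeg = rng.standard_normal((n, 8)) + 0.5 * reference(10.0, fs, n)[:, :1] * np.ones(8)

freqs = [8.0, 10.0, 12.0, 15.0]
scores = {f: cca_corr(eeg, reference(f, fs, n)) for f in freqs}
print(max(scores, key=scores.get))   # expected: 10.0
```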
Normalized distance aggregation of discriminative features for person reidentification
NASA Astrophysics Data System (ADS)
Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan
2018-03-01
We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, including local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast and discriminant metric learning models, i.e., cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain the optimized individual cross-view distance metrics. Finally, the cross-view person matching score is computed as the sum of the optimized individual cross-view distances after min-max normalization. Experimental results have shown the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).
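The fusion step reduces to min-max normalizing each model's distance vector and summing, so that no single metric dominates the ranking. A sketch with hypothetical distance vectors standing in for the LOMO/XQDA, FFN/XQDA, and LOMO-FFN/LSSL scores of one probe image:

```python
import numpy as np

def min_max(d):
    """Scale a distance vector to [0, 1]."""
    return (d - d.min()) / (d.max() - d.min() + 1e-12)

rng = np.random.default_rng(7)
d_lomo, d_ffn, d_fusion = rng.random(100), 5 * rng.random(100), rng.random(100)

aggregate = min_max(d_lomo) + min_max(d_ffn) + min_max(d_fusion)
ranking = np.argsort(aggregate)      # gallery identities, best match first
print(ranking[:5])
```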
Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.
Hsu, Wei-Yen
2013-12-01
In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove the electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with a system without artifact elimination, with feature selection using a genetic algorithm (GA), and with feature classification using Fisher's linear discriminant (FLD), on MI data from two data sets for eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.
Breast Cancer Recognition Using a Novel Hybrid Intelligent Method
Addeh, Jalil; Ebrahimzadeh, Ata
2012-01-01
Breast cancer is the second largest cause of cancer deaths among women. At the same time, it is also among the most curable cancer types if it can be diagnosed early. This paper presents a novel hybrid intelligent method for the recognition of breast cancer tumors. The proposed method includes three main modules: the feature extraction module, the classifier module, and the optimization module. In the feature extraction module, fuzzy features are proposed as the efficient characteristic of the patterns. In the classifier module, because of the promising generalization capability of support vector machines (SVM), an SVM-based classifier is proposed. In support vector machine training, the hyperparameters play a very important role in recognition accuracy. Therefore, in the optimization module, the bees algorithm (BA) is proposed for selecting appropriate parameters of the classifier. The proposed system is tested on the Wisconsin Breast Cancer database, and simulation results show that the recommended system has high accuracy. PMID:23626945
Estimating metallicities with isochrone fits to photometric data of open clusters
NASA Astrophysics Data System (ADS)
Monteiro, H.; Oliveira, A. F.; Dias, W. S.; Caetano, T. C.
2014-10-01
The metallicity is a critical parameter that affects the correct determination of a stellar cluster's fundamental characteristics and has important implications in Galactic and stellar evolution research. Fewer than 10% of the 2174 currently catalogued open clusters have their metallicity determined in the literature. In this work we present a method for estimating the metallicity of open clusters via non-subjective isochrone fitting using the cross-entropy global optimization algorithm applied to UBV photometric data. The free parameters distance, reddening, age, and metallicity are simultaneously determined by the fitting method. The fitting procedure uses weights for the observational data based on the estimation of membership likelihood for each star, which considers the observational magnitude limit, the density profile of stars as a function of radius from the center of the cluster, and the density of stars in multi-dimensional magnitude space. We present results of [Fe/H] for well-studied open clusters based on distinct UBV data sets. The [Fe/H] values obtained in the ten cases for which spectroscopic determinations were available in the literature agree, indicating that our method provides a good alternative for estimating [Fe/H] via objective isochrone fitting. Our results show that the typical precision is about 0.1 dex.
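The cross-entropy machinery behind such a fit can be sketched generically: sample parameter vectors from a Gaussian, keep the elite fraction, and refit the sampling distribution to that elite set. The objective below is a placeholder, not the paper's weighted isochrone likelihood, and the parameter values are illustrative.

```python
import numpy as np

def fit_quality(theta):
    """Placeholder objective with its minimum at (8.0, 0.3, 8.9, 0.0)."""
    target = np.array([8.0, 0.3, 8.9, 0.0])
    return np.sum((theta - target) ** 2, axis=1)

# Parameters: distance modulus, E(B-V), log age, [Fe/H] (initial guess and spread).
mean = np.array([9.0, 0.5, 9.3, -0.3])
std = np.array([1.0, 0.3, 0.5, 0.4])
rng = np.random.default_rng(8)

for _ in range(40):
    samples = rng.normal(mean, std, size=(200, 4))
    scores = fit_quality(samples)
    elite = samples[np.argsort(scores)[:20]]     # keep the best 10% of draws
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
print(mean)   # converges toward the objective's minimum
```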
Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav
2018-04-01
A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).
Ukwatta, Eranga; Yuan, Jing; Qiu, Wu; Rajchl, Martin; Chiu, Bernard; Fenster, Aaron
2015-12-01
Three-dimensional (3D) measurements of peripheral arterial disease (PAD) plaque burden extracted from fast black-blood magnetic resonance (MR) images have shown to be more predictive of clinical outcomes than PAD stenosis measurements. To this end, accurate segmentation of the femoral artery lumen and outer wall is required for generating volumetric measurements of PAD plaque burden. Here, we propose a semi-automated algorithm to jointly segment the femoral artery lumen and outer wall surfaces from 3D black-blood MR images, which are reoriented and reconstructed along the medial axis of the femoral artery to obtain improved spatial coherence between slices of the long, thin femoral artery and to reduce computation time. The developed segmentation algorithm enforces two priors in a global optimization manner: the spatial consistency between the adjacent 2D slices and the anatomical region order between the femoral artery lumen and outer wall surfaces. The formulated combinatorial optimization problem for segmentation is solved globally and exactly by means of convex relaxation using a coupled continuous max-flow (CCMF) model, which is a dual formulation to the convex relaxed optimization problem. In addition, the CCMF model directly derives an efficient duality-based algorithm based on the modern multiplier augmented optimization scheme, which has been implemented on a GPU for fast computation. The computed segmentations from the developed algorithm were compared to manual delineations from experts using 20 black-blood MR images. The developed algorithm yielded both high accuracy (Dice similarity coefficients ≥ 87% for both the lumen and outer wall surfaces) and high reproducibility (intra-class correlation coefficient of 0.95 for generating vessel wall area), while outperforming the state-of-the-art method in terms of computational time by a factor of ≈ 20. Copyright © 2015 Elsevier B.V. All rights reserved.
The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.
Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio
2015-01-01
We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a unique framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length.
The Edge-Disjoint Path Problem on Random Graphs by Message-Passing
2015-01-01
We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a unique framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length. PMID:26710102
A Joint Optimization Criterion for Blind DS-CDMA Detection
NASA Astrophysics Data System (ADS)
Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.
2006-12-01
This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.
NASA Astrophysics Data System (ADS)
Martinez, Raquel; Kraus, Adam L.
2017-06-01
Over the past decade, a growing population of planetary-mass companions (< 20 MJup; PMCs) orbiting young stars has been discovered. These objects are at wide separations (> 100 AU) from their host stars, challenging existing models of both star and planet formation. It is unclear whether these systems represent the low-mass extreme of stellar binary formation or the high-mass and wide-orbit extreme of planet formation theories, as various proposed formation pathways inadequately explain the physical and orbital aspects of these systems. Even so, determining which scenario best reproduces the observed characteristics of the PMCs will come once a statistically robust sample of directly-imaged PMCs is found and studied. We are developing an automated pipeline to search for wide-orbit PMCs to young stars in Spitzer/IRAC images. A Markov Chain Monte Carlo (MCMC) algorithm is the backbone of our novel point spread function (PSF) subtraction routine that efficiently creates and subtracts χ2-minimizing instrumental PSFs, simultaneously measuring astrometry and infrared photometry of these systems across the four IRAC channels (3.6 μm, 4.5 μm, 5.8 μm, and 8 μm). In this work, we present the results of a Spitzer/IRAC archival imaging study of 11 young, low-mass (0.044-0.88 M⊙; K3.5-M7.5) stars known to have faint, low-mass companions in 3 nearby star-forming regions (Chamaeleon, Taurus, and Upper Scorpius). We characterize the systems found to have low-mass companions with non-zero [I1] - [I4] colors, potentially signifying the presence of a circum(sub)stellar disk. Plans for future pipeline improvements and paths forward will also be discussed. Once this computational foundation is optimized, the stage is set to quickly scour the nearby star-forming regions already imaged by Spitzer, identify potential candidates for further characterization with ground- or space-based telescopes, and increase the number of widely-separated PMCs known.
Neutron stars structure in the context of massive gravity
NASA Astrophysics Data System (ADS)
Hendi, S. H.; Bordbar, G. H.; Eslam Panah, B.; Panahiyan, S.
2017-07-01
Motivated by recent interest in spin-2 massive gravitons, we study the structure of neutron stars in the context of massive gravity. The modifications of the TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for neutron star matter (which is extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of massive gravity has specific contributions to the structure of the neutron star and introduces new prescriptions for massive astrophysical objects. The mass-radius relation is examined, and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between the mass and radius of the neutron star versus the Planck mass is extracted.
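For reference, the standard TOV equations that the massive-gravity terms modify (general relativity, units G = c = 1) read as follows; the paper's graviton-mass corrections are not reproduced here.

```latex
% Standard TOV equations (general relativity, G = c = 1); the massive-gravity
% terms studied in the paper enter as corrections to the right-hand sides.
\frac{dP}{dr} = -\,\frac{\left(\varepsilon + P\right)\left(m + 4\pi r^{3} P\right)}
                        {r\left(r - 2m\right)},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\varepsilon .
```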
PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons.
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei
2016-09-02
Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes to realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and shanks of leg segments are fused together as the input information of the SVM. Based on the chosen window whose size is 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents classifier over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes, and the experimental results show the effectiveness of the proposed algorithm with an accuracy of 96.00% ± 2.45%. To improve its accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance.
PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei
2016-01-01
Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes to realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and shanks of leg segments are fused together as the input information of the SVM. Based on the chosen window whose size is 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents classifier over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes, and the experimental results show the effectiveness of the proposed algorithm with an accuracy of 96.00% ± 2.45%. To improve its accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance. PMID:27598160
Hearing through the noise: Biologically inspired noise reduction
NASA Astrophysics Data System (ADS)
Lee, Tyler Paul
Vocal communication in the natural world demands that a listener perform a remarkably complicated task in real-time. Vocalizations mix with all other sounds in the environment as they travel to the listener, arriving as a jumbled low-dimensional signal. A listener must then use this signal to extract the structure corresponding to individual sound sources. How this computation is implemented in the brain remains poorly understood, yet an accurate description of such mechanisms would impact a variety of medical and technological applications of sound processing. In this thesis, I describe initial work on how neurons in the secondary auditory cortex of the Zebra Finch extract song from naturalistic background noise. I then build on our understanding of the function of these neurons by creating an algorithm that extracts speech from natural background noise using spectrotemporal modulations. The algorithm, implemented as an artificial neural network, can be flexibly applied to any class of signal or noise and performs better than an optimal frequency-based noise reduction algorithm for a variety of background noises and signal-to-noise ratios. One potential drawback to using spectrotemporal modulations for noise reduction, though, is that analyzing the modulations present in an ongoing sound requires a latency set by the slowest temporal modulation computed. The algorithm avoids this problem by reducing noise predictively, taking advantage of the large amount of temporal structure present in natural sounds. This predictive denoising has ties to recent work suggesting that the auditory system uses attention to focus on predicted regions of spectrotemporal space when performing auditory scene analysis.
Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S
2011-05-15
Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared the accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters for each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that the classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with the contrasts for belief > disbelief and disbelief > belief. Copyright © 2010 Elsevier Inc. All rights reserved.
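A minimal sketch of this kind of classifier comparison, assuming scikit-learn; DecisionTreeClassifier and KNeighborsClassifier stand in for Weka's J48 and K*, and X, y are assumed to hold the IC time-course features and belief/disbelief labels.

```python
# Illustrative comparison loop (not the authors' pipeline), assuming sklearn.
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(X, y):
    models = {
        "RandomForest": RandomForestClassifier(n_estimators=200),
        "AdaBoost": AdaBoostClassifier(),
        "NaiveBayes": GaussianNB(),
        "DecisionTree": DecisionTreeClassifier(),  # stand-in for J48
        "KNN": KNeighborsClassifier(),             # stand-in for K*
        "SVM": SVC(kernel="rbf"),
    }
    for name, clf in models.items():
        # 5-fold cross-validated accuracy per classifier
        print(f"{name}: {cross_val_score(clf, X, y, cv=5).mean():.2%}")
```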
A novel method for overlapping community detection using Multi-objective optimization
NASA Astrophysics Data System (ADS)
Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa
2018-09-01
The problem of community detection, one of the most important applications of network science, can be addressed effectively by multi-objective optimization. In this paper, we present a novel, efficient method based on this approach, and we introduce the idea of using all Pareto fronts to detect overlapping communities. The proposed method has two main advantages compared with other approaches based on multi-objective optimization: scalability, and the ability to find overlapping communities. Unlike most previous works, the proposed method can find overlapping communities effectively. The new algorithm works by extracting appropriate communities from all the Pareto-optimal solutions instead of choosing a single optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed method performs better than other methods.
A Bayesian analysis of HAT-P-7b using the EXONEST algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Placek, Ben; Knuth, Kevin H.
2015-01-13
The study of exoplanets (planets orbiting other stars) is revolutionizing the way we view our universe. High-precision photometric data provided by the Kepler Space Telescope (Kepler) enables not only the detection of such planets, but also their characterization. This presents a unique opportunity to apply Bayesian methods to better characterize the multitude of previously confirmed exoplanets. This paper focuses on applying the EXONEST algorithm to characterize the transiting short-period hot Jupiter HAT-P-7b (also referred to as Kepler-2b). EXONEST evaluates a suite of exoplanet photometric models by applying Bayesian Model Selection, which is implemented with the MultiNest algorithm. These models take into account planetary effects, such as reflected light and thermal emissions, as well as the effect of the planetary motion on the host star, such as Doppler beaming, or boosting, of light from the reflex motion of the host star, and photometric variations due to the planet-induced ellipsoidal shape of the host star. By calculating model evidences, one can determine which model best describes the observed data, thus identifying which effects dominate the planetary system. Presented are parameter estimates and model evidences for HAT-P-7b.
Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for the analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" by either domain knowledge or intuition, we explicitly optimize the filter on training time series to maximize the robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
A robust fingerprint matching algorithm based on compatibility of star structures
NASA Astrophysics Data System (ADS)
Cao, Jia; Feng, Jufu
2009-10-01
In fingerprint verification or identification systems, most minutiae-based matching algorithms suffer from the problems of non-linear distortion and missing or fake minutiae. Local structures such as triangles or k-nearest structures are widely used to reduce the impact of non-linear distortion, but they suffer from missing and fake minutiae. In our proposed method, a star structure is used to represent the local structure. A star structure contains a variable number of minutiae and is thus more robust to missing and fake minutiae. Our method consists of four steps: 1) constructing star structures at the minutia level; 2) computing a similarity score for each structure pair, and eliminating impostor matched pairs with low scores (as it is generally assumed that there is only linear distortion in a local area, the similarity is defined by rotation and shifting); 3) voting for the remaining matched pairs according to the compatibility between them, and eliminating impostor matched pairs that gain few votes (the concept of compatibility was first introduced by Yansong Feng [4], with a definition based only on triangles; we define compatibility for star structures to fit our proposed algorithm); 4) computing the matching score based on the number of matched structures and their voting scores. The score also reflects the fact that a match should score higher if minutiae match in denser areas. Experiments evaluated on FVC 2004 show both the effectiveness and efficiency of our method.
A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems
Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...
2017-07-25
Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik
Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
VizieR Online Data Catalog: Outliers and similarity in APOGEE (Reis+, 2018)
NASA Astrophysics Data System (ADS)
Reis, I.; Poznanski, D.; Baron, D.; Zasowski, G.; Shahaf, S.
2017-11-01
t-SNE is a dimensionality reduction algorithm that is particularly well suited for the visualization of high-dimensional datasets. We use t-SNE to visualize our distance matrix. A priori, these distances could define a space with almost as many dimensions as objects, i.e., tens of thousands of dimensions. Obviously, since many stars are quite similar, and their spectra are defined by a few physical parameters, the minimal spanning space might be smaller. By using t-SNE we can examine the structure of our sample projected into 2D. We use our distance matrix as input to the t-SNE algorithm and in return get a 2D map of the objects in our dataset. For each star in a sample of 183232 APOGEE stars, we list the APOGEE IDs of the 99 stars with the most similar spectra (according to the method described in the paper), ordered by similarity. (3 data files).
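A minimal sketch of this usage, assuming scikit-learn and a precomputed (n x n) spectral distance matrix D for the sample of stars:

```python
# Project a precomputed distance matrix to 2D with t-SNE.
from sklearn.manifold import TSNE

# metric="precomputed" tells t-SNE that D already holds pairwise distances;
# a random initialization is required in that mode.
tsne = TSNE(n_components=2, metric="precomputed", init="random")
xy = tsne.fit_transform(D)   # xy[i] is the 2D map position of star i
```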
Infinitesimal Deformations of a Formal Symplectic Groupoid
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2011-09-01
Given a formal symplectic groupoid $G$ over a Poisson manifold $(M, \pi_0)$, we define a new object, an infinitesimal deformation of $G$, which can be thought of as a formal symplectic groupoid over the manifold $M$ equipped with an infinitesimal deformation $\pi_0 + \varepsilon \pi_1$ of the Poisson bivector field $\pi_0$. To any pair of natural star products $(\ast, \tilde\ast)$ having the same formal symplectic groupoid $G$ we relate an infinitesimal deformation of $G$. We call it the deformation groupoid of the pair $(\ast, \tilde\ast)$. To each star product with separation of variables $\ast$ on a Kähler-Poisson manifold $M$ we relate another star product with separation of variables $\hat\ast$ on $M$. We build an algorithm for calculating the principal symbols of the components of the logarithm of the formal Berezin transform of a star product with separation of variables $\ast$. This algorithm is based upon the deformation groupoid of the pair $(\ast, \hat\ast)$.
Maximum wind energy extraction strategies using power electronic converters
NASA Astrophysics Data System (ADS)
Wang, Quincy Qing
2003-10-01
This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Modeling and simulation of variable speed wind power generation systems are essential both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including the wind turbine, single-phase IGBT inverter, three-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants of these individual models differ greatly, a creative approach has also been developed in this thesis to combine these models for simulation of the entire wind power generation system. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on a literature review and computer simulations of wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency by continuously improving the performance of wind power generation systems. The algorithm is independent of wind power generation system characteristics and does not need wind speed or turbine speed measurements. Therefore, it can easily be implemented in various wind energy generation systems with different turbine inertias and diverse system hardware environments. In addition to the detailed description of the proposed algorithm, computer simulation results are presented in the thesis to demonstrate its advantages. As a final confirmation of the algorithm's feasibility, the algorithm has been implemented inside a single-phase IGBT inverter and tested with a wind simulator system in a research laboratory. Test results were found to be consistent with the simulation results. (Abstract shortened by UMI.)
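The on-line adaptation idea, adjusting the power command without wind or turbine speed measurements, resembles generic perturb-and-observe hill climbing. The following is a hedged sketch of that generic scheme, not the thesis' algorithm; read_power() is a hypothetical measurement hook.

```python
# Perturb-and-observe style maximum power point tracking (illustrative):
# perturb the converter's power command and keep moving in whichever
# direction increases measured output power.
def mppt_step(p_meas, state):
    """state holds the previous power reading, the current command,
    the perturbation direction (+1 or -1), and the step size."""
    if p_meas < state["p_prev"]:       # last perturbation hurt output,
        state["direction"] *= -1       # so reverse direction
    state["command"] += state["direction"] * state["step"]
    state["p_prev"] = p_meas
    return state["command"]

state = {"p_prev": 0.0, "command": 1.0, "direction": +1, "step": 0.05}
# in the converter control loop: command = mppt_step(read_power(), state)
```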
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online processing patterns, our algorithms can handle large-scale time series data by splitting the data into a set of chunks that are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy, because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared with several prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy. PMID:29795600
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online processing patterns, our algorithms can handle large-scale time series data by splitting the data into a set of chunks that are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy, because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared with several prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy.
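For reference, a textbook O(nm) dynamic-programming DTW distance of the kind these algorithms rely on (illustrative, not the paper's implementation):

```python
# Classic DTW distance between two 1-D series a and b.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```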
A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.
Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing
2015-08-14
Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvements as follows. Firstly, the ORiented Brief (ORB) method is used for feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, an improved RANdom SAmple Consensus (RANSAC) estimation method is adopted for the motion transformation. Meanwhile, high-precision Generalized Iterative Closest Points (GICP) is utilized to register the point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. These experiments show that the proposed algorithm achieves higher processing speed and better accuracy.
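A sketch of the ORB detection and FLANN-based KNN matching steps, assuming OpenCV; the LSH index parameters below are typical values for binary descriptors, and the ratio test is a common stand-in for the paper's bidirectional check.

```python
# ORB features + FLANN KNN matching between two grayscale frames.
import cv2

def match_frames(img1, img2):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # FLANN with an LSH index, suitable for binary ORB descriptors
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        # keep only unambiguous matches (Lowe-style ratio test)
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    return kp1, kp2, good
```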
Algorithm for astronomical, point source, signal to noise ratio calculations
NASA Technical Reports Server (NTRS)
Jayroe, R. R.; Schroeder, D. J.
1984-01-01
An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
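The quantity such an algorithm evaluates is essentially the textbook CCD equation; the following is a sketch under that assumption, not the NTRS code:

```python
# Standard CCD point-source signal-to-noise estimate.
import math

def point_source_snr(S, B, npix, read_noise, dark, t):
    """S: source rate (e-/s); B: background rate (e-/s/pixel);
    npix: pixels in the aperture; read_noise: e- RMS per pixel;
    dark: dark current (e-/s/pixel); t: integration time (s)."""
    signal = S * t
    noise = math.sqrt(S * t + npix * (B * t + dark * t + read_noise ** 2))
    return signal / noise   # SNR grows roughly as sqrt(t)
```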
NASA Astrophysics Data System (ADS)
Jaranowski, Piotr; Królak, Andrzej
2000-03-01
We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection, which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given 100 Gflops of computational power, an all-sky search for an observation time of 7 days and a directed search for an observation time of 120 days are possible, whereas an all-sky search for 120 days of observation time is computationally prohibitive.
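For the Gaussian-noise case mentioned above, where maximum likelihood detection reduces to matched filtering, a minimal illustration of the detection statistic is:

```python
# Matched-filter SNR for a known template in white Gaussian noise.
import numpy as np

def matched_filter_snr(data, template, sigma):
    """data, template: equal-length 1-D arrays; sigma: noise std dev.
    Returns the normalized zero-lag correlation, which is compared
    against a threshold set by the desired false-alarm probability."""
    return np.dot(data, template) / (sigma * np.linalg.norm(template))
```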
Warehouse stocking optimization based on dynamic ant colony genetic algorithm
NASA Astrophysics Data System (ADS)
Xiao, Xiaoxu
2018-04-01
In view of the varied orders handled by FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise, thereby optimizing the warehouse logistics and improving the processing speed of outgoing orders. In addition, relevant intelligent algorithms for optimizing the stocking route problem are analyzed. The ant colony algorithm and the genetic algorithm, which have good applicability, are studied in particular. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to prove the effectiveness of the parameter optimization.
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees and is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), which adds a Levy flight algorithm to the queen's mating flight and a neighborhood search to improve the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
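Levy flight steps are commonly generated with Mantegna's algorithm; a sketch under that assumption (the exact IMBO update rule is not specified here):

```python
# Heavy-tailed Levy flight step via Mantegna's algorithm.
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)   # mostly small moves, rare long jumps
```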
Evaluation of centroiding algorithm error for Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki
2014-08-01
The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors. It is therefore necessary that the centroiding algorithm achieve high accuracy for any observable. Following the approach used for Gaia, we use an LSF fitting method as the centroiding algorithm and investigate the systematic error of the algorithm for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs when applying a Principal Component Analysis. We show that the centroiding algorithm error decreases after adopting this method.
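For contrast with the LSF-fitting method, the following is a minimal windowed center-of-gravity centroid, the usual baseline in centroiding studies; this is illustrative only, not the Nano-JASMINE pipeline.

```python
# Center-of-gravity centroid of a star spot in a small pixel window.
import numpy as np

def centroid(window):
    """window: 2-D array of background-subtracted pixel values
    around the star spot; returns (row, col) centroid estimate."""
    total = window.sum()
    ys, xs = np.indices(window.shape)
    return (ys * window).sum() / total, (xs * window).sum() / total
```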
Peters, Sanne A. E.; Jones, Alexandra; Crino, Michelle; Taylor, Fraser; Woodward, Mark; Neal, Bruce
2017-01-01
Background: The Health Star Rating (HSR) is an interpretive front-of-pack labelling system that rates the overall nutritional profile of packaged foods. The algorithm underpinning the HSR includes total sugar content as one of the components. This has been criticised because intrinsic sugars naturally present in dairy, fruits, and vegetables are treated the same as sugars added during food processing. We assessed whether the HSR could better discriminate between core and discretionary foods by including added sugar in the underlying algorithm. Methods: Nutrition information was extracted for 34,135 packaged foods available in The George Institute’s Australian FoodSwitch database. Added sugar levels were imputed from food composition databases. Products were classified as ‘core’ or ‘discretionary’ based on the Australian Dietary Guidelines. The ability of each of the nutrients included in the HSR algorithm, as well as added sugar, to discriminate between core and discretionary foods was estimated using the area under the curve (AUC). Results: 15,965 core and 18,350 discretionary foods were included. Of these, 8230 (52%) core foods and 15,947 (87%) discretionary foods contained added sugar. Median (Q1, Q3) HSRs were 4.0 (3.0, 4.5) for core foods and 2.0 (1.0, 3.0) for discretionary foods. Median added sugar contents (g/100 g) were 3.3 (1.5, 5.5) for core foods and 14.6 (1.8, 37.2) for discretionary foods. Of all the nutrients used in the current HSR algorithm, total sugar had the greatest individual capacity to discriminate between core and discretionary foods; AUC 0.692 (0.686; 0.697). Added sugar alone achieved an AUC of 0.777 (0.772; 0.782). A model with all nutrients in the current HSR algorithm had an AUC of 0.817 (0.812; 0.821), which increased to 0.871 (0.867; 0.874) with inclusion of added sugar. Conclusion: The HSR nutrients discriminate well between core and discretionary packaged foods. However, discrimination was improved when added sugar was also included. These data argue for inclusion of added sugar in an updated HSR algorithm and declaration of added sugar as part of mandatory nutrient declarations. PMID:28678187
scarlet: Source separation in multi-band images by Constrained Matrix Factorization
NASA Astrophysics Data System (ADS)
Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert
2018-03-01
SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
Visualization of Pulsar Search Data
NASA Astrophysics Data System (ADS)
Foster, R. S.; Wolszczan, A.
1993-05-01
The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
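The core of such a periodicity search is a Fourier power spectrum scan; the sketch below is simplified, with a crude significance measure standing in for a real candidate-selection stage.

```python
# Flag strong spectral bins in a (dedispersed) time series as period candidates.
import numpy as np

def candidate_periods(timeseries, dt, threshold=8.0):
    """timeseries: 1-D intensity samples; dt: sample spacing in seconds.
    Returns (period_seconds, significance) pairs above the threshold."""
    spec = np.abs(np.fft.rfft(timeseries - timeseries.mean())) ** 2
    freqs = np.fft.rfftfreq(len(timeseries), dt)
    sig = (spec - spec.mean()) / spec.std()   # crude z-score significance
    idx = np.where(sig > threshold)[0]
    return [(1.0 / freqs[i], sig[i]) for i in idx if freqs[i] > 0]
```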
A Compact VLSI System for Bio-Inspired Visual Motion Estimation.
Shi, Cong; Luo, Gang
2018-04-01
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
Application of a fast skyline computation algorithm for serendipitous searching problems
NASA Astrophysics Data System (ADS)
Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary
2018-02-01
Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto-optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information about non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels in order to accelerate tree construction and traversal. In this study, we present the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
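For orientation, here is a naive O(n^2) skyline extraction under the minimization convention, the baseline that structures like JR-tree are designed to accelerate:

```python
# Return indices of non-dominated (skyline) entries; smaller is better
# in every attribute.
import numpy as np

def skyline(points):
    """points: (n, d) array of attribute vectors."""
    keep = []
    for i, p in enumerate(points):
        # q dominates p if q <= p everywhere and q < p somewhere
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep
```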
Particle swarm optimization of the sensitivity of a cryogenic gravitational wave detector
NASA Astrophysics Data System (ADS)
Michimura, Yuta; Komori, Kentaro; Nishizawa, Atsushi; Takeda, Hiroki; Nagano, Koji; Enomoto, Yutaro; Hayama, Kazuhiro; Somiya, Kentaro; Ando, Masaki
2018-06-01
Cryogenic cooling of the test masses of interferometric gravitational wave detectors is a promising way to reduce thermal noise. However, cryogenic cooling limits the incident power to the test masses, which limits the freedom of shaping the quantum noise. Cryogenic cooling also requires short and thick suspension fibers to extract heat, which could result in the worsening of thermal noise. Therefore, careful tuning of multiple parameters is necessary in designing the sensitivity of cryogenic gravitational wave detectors. Here, we propose the use of particle swarm optimization to optimize the parameters of these detectors. We apply it for designing the sensitivity of the KAGRA detector, and show that binary neutron star inspiral range can be improved by 10%, just by retuning seven parameters of existing components. We also show that the sky localization of GW170817-like binaries can be further improved by a factor of 1.6 averaged across the sky. Our results show that particle swarm optimization is useful for designing future gravitational wave detectors with higher dimensionality in the parameter space.
NASA Astrophysics Data System (ADS)
Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong
2016-09-01
With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has greatly increased, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing imagery has the advantages of high resolution and wide coverage, which is of great guiding significance for urban planning, transportation management, travel route choice, and so on. Firstly, this paper preprocesses the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, histogram equalization and linear enhancement are applied to the preprocessing results to obtain the optimal threshold for image segmentation; on the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) are used to suppress water and vegetation information in the preprocessing results. The above two processing results are then combined, and geometric characteristics are used to complete the road information extraction. The extracted road vector is used to limit the target vehicle area. Target vehicle extraction is divided into bright vehicle extraction and dark vehicle extraction, and the extraction results of the two kinds of vehicles are combined to obtain the final results. The experimental results demonstrate that the proposed algorithm has high precision for vehicle information extraction from different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average residual rate was about 13.60%, and the average accuracy was approximately 91.26%.
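The two suppression indices are simple band ratios; the sketch below assumes the usual band definitions, and the array names and mask thresholds are illustrative assumptions rather than the paper's values.

```python
# NDVI and NDWI masks to suppress vegetation and water before road extraction.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)      # high for vegetation

def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-9)  # high for water

# keep pixels that look like neither vegetation nor water
# (nir_band, red_band, green_band: assumed multispectral band arrays)
road_candidates = (ndvi(nir_band, red_band) < 0.2) & \
                  (ndwi(green_band, nir_band) < 0.0)
```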
Autonomous In-Situ Resources Prospector
NASA Technical Reports Server (NTRS)
Dissly, R. W.; Buehler, M. G.; Schaap, M. G.; Nicks, D.; Taylor, G. J.; Castano, R.; Suarez, D.
2004-01-01
This presentation will describe the concept of an autonomous, intelligent, rover-based rapid surveying system to identify and map several key lunar resources to optimize their ISRU (In Situ Resource Utilization) extraction potential. Prior to an extraction phase for any target resource, ground-based surveys are needed to provide confirmation of remote observation, to quantify and map their 3-D distribution, and to locate optimal extraction sites (e.g. ore bodies) with precision to maximize their economic benefit. The system will search for and quantify optimal minerals for oxygen production feedstock, water ice, and high glass-content regolith that can be used for building materials. These are targeted because of their utility and because they are, or are likely to be, variable in quantity over spatial scales accessible to a rover (i.e., few km). Oxygen has benefits for life support systems and as an oxidizer for propellants. Water is a key resource for sustainable exploration, with utility for life support, propellants, and other industrial processes. High glass-content regolith has utility as a feedstock for building materials as it readily sinters upon heating into a cohesive matrix more readily than other regolith materials or crystalline basalts. Lunar glasses are also a potential feedstock for oxygen production, as many are rich in iron and titanium oxides that are optimal for oxygen extraction. To accomplish this task, a system of sensors and decision-making algorithms for an autonomous prospecting rover is described. One set of sensors will be located in the wheel tread of the robotic search vehicle providing contact sensor data on regolith composition. Another set of instruments will be housed on the platform of the rover, including VIS-NIR imagers and spectrometers, both for far-field context and near-field characterization of the regolith in the immediate vicinity of the rover. Also included in the sensor suite are a neutron spectrometer, ground-penetrating radar, and an instrumented cone penetrometer for subsurface assessment. Output from these sensors will be evaluated autonomously in real-time by decision-making software to evaluate if any of the targeted resources has been detected, and if so, to quantify their abundance. Algorithms for optimizing the mapping strategy based on target resource abundance and distribution are also included in the autonomous software. This approach emphasizes on-the-fly survey measurements to enable efficient and rapid prospecting of large areas, which will improve the economics of ISRU system approaches. The mature technology will enable autonomous rovers to create in-situ resource maps of lunar or other planetary surfaces, which will facilitate human and robotic exploration.
NASA Astrophysics Data System (ADS)
Hummel, Christiaan; Honkoop, Pieter; van der Meer, Jaap
2011-07-01
Doubt has been shed recently on the most popular optimal foraging theory, which states that predators should maximize prey profitability, i.e., select the prey item that contains the highest energy content per handling time. We hypothesized that sea stars do not forage on blue mussels according to classical optimal foraging theory but actively avoid damage that may be caused by, e.g., capturing or foraging on mussels with too-strong shells; hence the sea stars will prefer mussels that are smaller than the most profitable ones. Here we present experimental evidence that the sea star Asterias rubens indeed chooses much smaller blue mussels Mytilus edulis to forage on than the most profitable ones. Hence this study does not support the optimal foraging theory. There may be other constraints involved in foraging than just optimizing energy intake; for example, predators may also be concerned with preventing potential loss or damage of their foraging instruments.
Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives
Wang, Weiping; He, Lei
2014-01-01
Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although very limitedly. Confronted with different types of tasks, multiple UAVs usually need to be coordinated, which can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then the corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are proposed in the rest of this paper to accomplish the architecting process iteratively. The availability and effectiveness of the overall method are validated by two simulations based on two different scenarios. PMID:25140328
NASA Astrophysics Data System (ADS)
Wiegert, R. F.
2009-05-01
A man-portable Magnetic Scalar Triangulation and Ranging ("MagSTAR") technology for Detection, Localization and Classification (DLC) of unexploded ordnance (UXO) has been developed by Naval Surface Warfare Center Panama City Division (NSWC PCD) with support from the Strategic Environmental Research and Development Program (SERDP). Proof of principle of the MagSTAR concept and its unique advantages for real-time, high-mobility magnetic sensing applications have been demonstrated by field tests of a prototype man-portable MagSTAR sensor. The prototype comprises: a) an array of fluxgate magnetometers configured as a multi-tensor gradiometer; b) a GPS-synchronized signal processing system; c) unique STAR algorithms for point-by-point, standoff DLC of magnetic targets. This paper outlines details of: i) MagSTAR theory; ii) design and construction of the prototype sensor; iii) signal processing algorithms recently developed to improve the technology's target-discrimination accuracy; iv) results of field tests of the portable gradiometer system against magnetic dipole targets. The results demonstrate that the MagSTAR technology is capable of very accurate, high-speed localization of magnetic targets at standoff distances of several meters. These advantages could readily be transitioned to a wide range of defense, security and sensing applications to provide faster and more effective DLC of UXO and buried mines.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
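A sketch of the level-wise energy features described above, assuming the PyWavelets package; grouping coefficients per decomposition band and summarizing each band by its energy is the generic form of the feature step, not the paper's exact routine.

```python
# Haar DWT energy features for one sensor window.
import numpy as np
import pywt

def haar_energy_features(signal, levels=3):
    """signal: 1-D array of sensor samples; returns one energy value per
    wavelet band (approximation plus `levels` detail bands)."""
    coeffs = pywt.wavedec(signal, "haar", level=levels)
    return np.array([np.sum(c ** 2) for c in coeffs])
```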
Tracing Star Formation Around Quasars With Polycyclic Aromatic Hydrocarbons
NASA Astrophysics Data System (ADS)
Bilton, Lawrence Edward
2016-09-01
The feedback processes linking quasar activity to galaxy stellar mass growth are not well understood. If star formation is closely causally linked to black hole accretion, one may expect star formation confined to nuclear regions rather than extended over several kpc scales. Since Polycyclic Aromatic Hydrocarbon (PAH) emission features are widely used as tracers of star formation, it is therefore possible to use PAH emission detected around QSOs to help resolve this question. PAH data from a sample of 63 QSOs procured from the Spitzer Space Telescope's Infrared Spectrograph (IRS) are used, employing the Spectroscopic Modelling Analysis and Reduction Tool's (SMART) Advanced Optimal (AdOpt) extraction routines. A composite spectrum was also produced to help determine the average conditions and compositions of star-forming regions. From our high-redshift (>1) sample of QSOs, marginally significant extended star formation is found on average of 34 scales. At low redshift, the median extension after deconvolving the instrumental point spread function is 3.2 , potentially showing evolutionary variations in star formation activity. However, the limited spatial resolving power constrains the ability to make any conclusive remarks. It is also found that the QSO/AGN composite has more neutral PAHs than the starbursting and main sequence galaxies, consistent with the AGN having no contribution to heating the PAH emission, and also consistent with the average PAH emission found on scales (i.e. not confined to the nuclear regions). A tentative detection of water vapour emission from the gravitationally lensed Einstein Cross quasar, QSO J2237+0305, is also presented, suggesting a strong molecular outflow possibly driven by the active nucleus.
A Large-Telescope Natural Guide Star AO System
NASA Technical Reports Server (NTRS)
Redding, David; Milman, Mark; Needels, Laura
1994-01-01
None given. From overview and conclusion: Keck Telescope case study. Objectives: low cost, good sky coverage. Approach: natural guide star at 0.8 um, correcting at 2.2 um. Conclusions: good performance is possible for Keck with a natural guide star AO system (SR > 0.2 to mag 17+). An AO-optimized CCD should be very effective. Optimizing td is very effective. Spatial coadding is not effective except perhaps at extreme low light levels.
Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm
NASA Astrophysics Data System (ADS)
Anam, S.
2017-10-01
Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as a multimodal optimization problem, is that of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), Genetic Algorithms (GA), the Artificial Bee Colony (ABC) algorithm, etc. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the function is flat, the proposed method does not work well.
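A hedged stand-in for the proposed hybrid: any population-based global stage can seed the local stage. In the sketch below, plain random sampling plays the ABC role and SciPy's BFGS performs the refinement; this illustrates the two-stage structure, not the paper's ABC update rules.

```python
# Global stage (sampling) followed by local BFGS refinement.
import numpy as np
from scipy.optimize import minimize

def hybrid_optimize(f, bounds, n_samples=500, rng=np.random.default_rng(0)):
    """f: objective to minimize; bounds: list of (low, high) per dimension."""
    lo, hi = np.array(bounds).T
    samples = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    x0 = samples[np.argmin([f(x) for x in samples])]   # best global candidate
    return minimize(f, x0, method="BFGS")              # local refinement
```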
An Improved Technique for the Photometry and Astrometry of Faint Companions
NASA Astrophysics Data System (ADS)
Burke, Daniel; Gladysz, Szymon; Roberts, Lewis; Devaney, Nicholas; Dainty, Chris
2009-07-01
We propose a new approach to differential astrometry and photometry of faint companions in adaptive optics images. It is based on a prewhitening matched filter, also referred to in the literature as the Hotelling observer. We focus on cases where the signal of the companion is located within the bright halo of the parent star. Using real adaptive optics data from the 3 m Shane telescope at the Lick Observatory, we compare the performance of the Hotelling algorithm with other estimation algorithms currently used for the same problem. The real single-star data are used to generate artificial binary objects with a range of magnitude ratios. In most cases, the Hotelling observer gives significantly lower astrometric and photometric errors. In the case of high Strehl ratio (SR) data (SR ≈ 0.5), the differential photometry of a binary star with Δm = 4.5 and a separation of 0.6″ is better than 0.1 mag, a factor of 2 lower than for the other algorithms considered.
Finite element dynamic analysis on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lambiotte, J. J., Jr.
1978-01-01
Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
Classification of Two Class Motor Imagery Tasks Using Hybrid GA-PSO Based K-Means Clustering.
Suraj; Tiwari, Purnendu; Ghosh, Subhojit; Sinha, Rakesh Kumar
2015-01-01
Transferring the brain computer interface (BCI) from laboratory conditions to real world applications requires BCI to be applied asynchronously, without any time constraint. The high level of dynamism in the electroencephalogram (EEG) signal motivates the use of evolutionary algorithms (EA). Motivated by these two facts, in this work a hybrid GA-PSO based K-means clustering technique has been used to distinguish two-class motor imagery (MI) tasks. The proposed hybrid GA-PSO based K-means clustering is found to outperform genetic algorithm (GA) and particle swarm optimization (PSO) based K-means clustering techniques in terms of both accuracy and execution time. The lower execution time of the hybrid GA-PSO technique makes it suitable for real time BCI applications. Time frequency representation (TFR) techniques have been used to extract the features of the signal under investigation. TFR-based features are extracted, and the feature vector is formed relying on the concepts of event related synchronization (ERS) and desynchronization (ERD).
Classification of Two Class Motor Imagery Tasks Using Hybrid GA-PSO Based K-Means Clustering
Suraj; Tiwari, Purnendu; Ghosh, Subhojit; Sinha, Rakesh Kumar
2015-01-01
Transferring the brain computer interface (BCI) from laboratory conditions to real world applications requires BCI to be applied asynchronously, without any time constraint. The high level of dynamism in the electroencephalogram (EEG) signal motivates the use of evolutionary algorithms (EA). Motivated by these two facts, in this work a hybrid GA-PSO based K-means clustering technique has been used to distinguish two-class motor imagery (MI) tasks. The proposed hybrid GA-PSO based K-means clustering is found to outperform genetic algorithm (GA) and particle swarm optimization (PSO) based K-means clustering techniques in terms of both accuracy and execution time. The lower execution time of the hybrid GA-PSO technique makes it suitable for real time BCI applications. Time frequency representation (TFR) techniques have been used to extract the features of the signal under investigation. TFR-based features are extracted, and the feature vector is formed relying on the concepts of event related synchronization (ERS) and desynchronization (ERD). PMID:25972896
Interior search algorithm (ISA): a novel approach for global optimization.
Gandomi, Amir H
2014-07-01
This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA can efficiently solve optimization problems and can outperform other well-known algorithms. Further, the proposed algorithm is very simple and has only one parameter to tune.
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-01-01
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency of the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, an iteration termination condition based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition continually during decomposition to avoid noise. Third, the composite dictionaries are enriched with a modulation dictionary, modulation being one of the important structural characteristics of gear fault signals. Meanwhile, the iteration termination settings, sub-feature dictionary selections, and operational efficiency of CD-MaMP and CD-SaMP are discussed using simulated gear vibration signals with noise. The simulated sensor-based vibration signal results show that the attenuation-coefficient termination condition greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has clear advantages in sparsity and efficiency over CD-MaMP. Sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
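For orientation, single-atom matching pursuit with an attenuation-based stopping rule can be sketched in a few lines of Python; the composite dictionary below (a Fourier block plus a crudely modulated block) is a toy stand-in for the dictionaries described above:

import numpy as np

def matching_pursuit(x, D, atten_tol=1e-3, max_iter=200):
    # Single-atom matching pursuit over dictionary D (columns are unit-norm
    # atoms). Stops when the relative drop (attenuation) of the residual
    # energy falls below atten_tol.
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    prev_energy = residual @ residual
    for _ in range(max_iter):
        corr = D.T @ residual            # correlation with every atom
        k = np.abs(corr).argmax()        # best single atom this pass
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
        energy = residual @ residual
        if prev_energy - energy < atten_tol * prev_energy:
            break                        # attenuation-based termination
        prev_energy = energy
    return coeffs, residual

n = 256
t = np.arange(n) / n
fourier = np.stack([np.cos(2 * np.pi * f * t) for f in range(1, 33)], axis=1)
modulated = np.stack([np.exp(-5 * t) * np.cos(2 * np.pi * f * t)
                      for f in range(1, 33)], axis=1)
D = np.hstack([fourier, modulated])
D /= np.linalg.norm(D, axis=0)
signal = 2 * D[:, 40] + 0.05 * np.random.default_rng(2).standard_normal(n)
coeffs, residual = matching_pursuit(signal, D)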
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and belongs to the family of swarm intelligence methods. In this study we propose improved marriage in honey bees optimization (IMBO), which adds a Levy flight for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416
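Levy flights produce mostly small steps with occasional long jumps; a common way to draw them is Mantegna's algorithm, sketched below in Python (the mating-flight usage shown is illustrative, not the authors' exact update):

import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    # Mantegna's algorithm for Levy-stable step lengths with exponent beta.
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Illustrative queen move: perturb the current best solution by a Levy step.
queen = np.zeros(5)
candidate = queen + 0.1 * levy_step(5)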
Highlights of Odessa Branch of AN in 2017
NASA Astrophysics Data System (ADS)
Andronov, I. L.
2017-12-01
An annual report with a list of publications. Our group works on variable star research within the international campaign "Inter-Longitude Astronomy" (ILA), based on temporary working groups in collaboration with Poland, Slovakia, Korea, the USA and other countries. A recent self-review of highlights was published in 2017. Our group continues the scientific school of Prof. Vladymir P. Tsesevich (1907 - 1983). Another project we participate in is "AstroInformatics". The unprecedented photo-polarimetric monitoring of a group of AM Her - type magnetic cataclysmic variable stars has been carried out since 1989 (photometry in our group, since 1978). Photometric monitoring of the intermediate polars (MU Cam, V1343 Her, V2306 Cyg et al.) was continued to study the rotational evolution of magnetic white dwarfs. A super-low luminosity state was discovered in the outbursting intermediate polar = magnetic dwarf nova DO Dra. The previously typical low state was sometimes interrupted by outbursts, which are narrower than usual dwarf nova outbursts. On one occasion, TPO ("Transient Periodic Oscillations") were detected. The orbital and quasi-periodic variability was recently studied. Such super-low states are characteristic of nova-like variables (e.g. MV Lyr, TT Ari) and intermediate polars, but unusual for dwarf novae. The electronic "Catalogue of Characteristics and Atlas of the Light Curves of Newly-Discovered Eclipsing Binary Stars" was compiled and is being prepared for publication. The software NAV ("New Algol Variable"), with specially developed algorithms, was used. It allows the beginning and end of eclipses to be determined even in EB- and EW-type stars, whereas the current classifications (GCVS, VSX) define the beginning and end of eclipses only for EA-type objects. Further improvements of the NAV algorithm were comparatively studied. The "Wall-Supported Polynomial" (WSP) algorithms were implemented in the software MAVKA for statistically optimal modeling of flat eclipses and exoplanet transits. MAVKA was used to study the effects of mass transfer and the presence of third components in close binary stellar systems, and to analyze the poorly studied eclipsing binary 2MASS J20355082+5242136. An Atlas of the Light Curves and Phase Plane Portraits of Selected Long-Period Variables was compiled.
Leelarungrayub, Jirakrit; Yankai, Araya; Pinkaew, Decha; Puntumetakul, Rungthip; Laskin, James J; Bloomer, Richard J
2016-01-01
Objective The aims of this preliminary study were to evaluate the antioxidant and lipid status before and after star fruit juice consumption in healthy elderly subjects, and the vitamins in star fruit extracts. Methods A preliminary designated protocol was performed in 27 elderly individuals with a mean (±SD) age of 69.5±5.3 years, by planning a 2-week control period before 4 weeks of consumption of star fruit twice daily. Oxidative stress parameters such as total antioxidant capacity, glutathione, malondialdehyde, protein hydroperoxide, multivitamins such as l-ascorbic acid (Vit C), retinoic acid (Vit A), and tocopherol (Vit E), and the lipid profile parameters such as cholesterol, triglyceride, high-density lipoprotein-cholesterol (HDL-C) and low-density lipoprotein-cholesterol (LDL-C) were analyzed. Moreover, Vit C, Vit A, and Vit E levels were evaluated in the star fruit extracts during the 4-week period. Results In the 2-week control period, all parameters showed no statistically significant difference; after 4 weeks of consumption, significant improvement in the antioxidant status was observed with increased total antioxidant capacity and reduced malondialdehyde and protein hydroperoxide levels, as well as significantly increased levels of Vit C and Vit A, when compared to the two-time evaluation during the baseline periods. However, glutathione and Vit E showed no statistical difference. In addition, the HDL-C level was higher and the LDL-C level was significantly lower when compared to both baseline periods. But the levels of triglyceride and cholesterol showed no difference. Vit C and Vit A were identified in small quantities in the star fruit extract. Conclusion This preliminary study suggested that consumption of star fruit juice twice daily for 1 month improved the elderly people’s antioxidant status and vitamins, as well as improved the lipoproteins related to Vit C and Vit A in the star fruit extract. PMID:27621606
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators (a bilinear mapping) as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
NASA Astrophysics Data System (ADS)
Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.
2018-03-01
Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips is to be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration and program simplicity in finding the optimal solution.
User-customized brain computer interfaces using Bayesian optimization
NASA Astrophysics Data System (ADS)
Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali
2016-04-01
Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters), including the EEG frequency bands, the channels and the time intervals from which the features are extracted, should be pre-determined based on each subject’s brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on pre-studies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
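Assuming the scikit-optimize package is available, the hyper-parameter loop can be sketched as below; evaluate_bci and the search-space bounds are hypothetical placeholders for the subject-specific cross-validation pipeline, not the paper's actual setup:

import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical objective: cross-validated error of a BCI classifier as a
# function of frequency band, window start, and regularization strength.
def evaluate_bci(params):
    f_lo, f_hi, t_start, C = params
    if f_lo >= f_hi:                  # penalize inverted frequency bands
        return 1.0
    rng = np.random.default_rng(int(f_lo * 7 + f_hi))
    return float(rng.random())        # placeholder for 1 - CV accuracy

space = [Integer(4, 14, name="f_lo_hz"),
         Integer(16, 40, name="f_hi_hz"),
         Real(0.0, 2.0, name="t_start_s"),
         Real(1e-3, 1e2, prior="log-uniform", name="C")]

res = gp_minimize(evaluate_bci, space, n_calls=30, random_state=0)
print("best hyper-parameters:", res.x, "estimated error:", res.fun)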
NASA Astrophysics Data System (ADS)
Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril
2017-12-01
In this research, the Particle Swarm Optimization (PSO) algorithm is used to optimize the feature weights of the Voting Feature Interval 5 (VFI5) classification algorithm, yielding a combined PSO-VFI5 model. Optimizing the feature weights on the diabetes and dyspepsia data is considered important because these conditions affect many people's lives, and inaccuracy in determining the most dominant feature weights in such data could contribute to fatal misdiagnoses. PSO increased the accuracy in fold 1 from 92.31% to 96.15%, a gain of 3.8 percentage points; in fold 2, the VFI5 accuracy of 92.52% was also produced by the PSO-tuned model, so accuracy was unchanged; and in fold 3, accuracy increased from 85.19% to 96.29%, a gain of 11 percentage points. Over the three folds, total accuracy increased by about 14 percentage points. In general, the Particle Swarm Optimization algorithm succeeded in increasing the accuracy in several folds; it can therefore be concluded that the PSO algorithm is well suited to optimizing the VFI5 classification algorithm.
Optimizing spread dynamics on graphs by message passing
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.
2013-09-01
Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
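For contrast with the message-passing approach, the greedy baseline discussed above can be sketched as Monte Carlo simulation of a simple cascade; the independent cascade model and the toy adjacency-list graph below are illustrative stand-ins, not the method of the paper:

import random

def simulate_ic(graph, seeds, p, rng):
    # One Monte Carlo run of the independent cascade model: each newly
    # activated node gets one chance to activate each inactive neighbor.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, p=0.1, runs=200, seed=0):
    # Classic greedy selection for submodular spread maximization: repeatedly
    # add the node with the largest estimated marginal gain in spread.
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        gains = {}
        for cand in graph:
            if cand in seeds:
                continue
            gains[cand] = sum(simulate_ic(graph, seeds + [cand], p, rng)
                              for _ in range(runs)) / runs
        seeds.append(max(gains, key=gains.get))
    return seeds

toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(greedy_seeds(toy, k=2))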
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract the target from a complex background more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization is proposed. Firstly, the method of single-threshold selection based on Arimoto entropy is extended to dual-threshold selection in order to separate the target from the background more accurately. Then, the intermediate variables in the Arimoto entropy dual-threshold selection formulae are calculated recursively, effectively eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm is improved by a chaotic sequence based on the tent map. A fast search for the two optimal thresholds is achieved using the improved bee colony optimization algorithm, accelerating the search substantially. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments the target more quickly and accurately, with a superior segmentation effect. It proves to be a fast and effective method for image segmentation.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm, using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and meanwhile verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
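The exact form of the weighted kurtosis index is not given in the abstract; one plausible construction, kurtosis weighted by the absolute correlation between an IMF and the raw signal, is sketched here in Python:

import numpy as np
from scipy.stats import kurtosis

def weighted_kurtosis_index(imf, signal):
    # Kurtosis (impulsiveness) scaled by |correlation| with the raw signal,
    # so impulsive but signal-related modes score highest (an assumed form).
    rho = abs(np.corrcoef(imf, signal)[0, 1])
    return kurtosis(imf, fisher=False) * rho

A GWO search over (bandwidth threshold, B-spline order) would then score each candidate setting by the largest index among the IMFs it produces.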
The Spatial Distribution of Resolved Young Stars in Blue Compact Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Murphy, K.; Crone, M. M.
2002-12-01
We present the first results from a survey of the distribution of resolved young stars in Blue Compact Dwarf Galaxies. In order to identify the dominant physical processes driving star formation in these puzzling galaxies, we use a multi-scale cluster-finding algorithm to quantify the characteristic scales and properties of star-forming regions, from sizes smaller than 10 pc up to the size of each entire galaxy. This project was partially funded by the Lubin Chair at Skidmore College.
Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes
NASA Technical Reports Server (NTRS)
Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.
2013-01-01
Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One process for phasing primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS); DFS technology can also be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation, and an elegant method of coarse phasing segmented mirrors. DFS accuracy depends upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line: in essence, it applies an angular dithering procedure to the extraction line and combines this dithering with an error function while minimizing the phase term of the fitted signal.
A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony
Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to this problem, although applying optimization raises difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive ABC algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We have used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy. PMID:25165748
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are passed to genetic algorithm based codebook generation in vector quantization; the initial populations are created by selecting random code vectors from the training set, variation is introduced by means of the crossover genetic operation, and IP-HMM carries out the recognition. The proposed speech recognition technique offers 97.14% accuracy.
Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors
Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal
2014-01-01
Human detection using visible surveillance sensors is an important and challenging work for intruder detection and safety management. The biggest barrier of real-time human detection is the computational time required for dense image scaling and scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region-of-interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and the divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed show a better performance than those of other related methods. PMID:25393782
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
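The fitting loop itself is generic: simulate the model, compare with measurements, and let a population-based optimizer shrink the error. The Python sketch below uses a toy square-law MOSFET model and SciPy's differential evolution as a stand-in optimizer (neither the Pennsylvania surface-potential model nor ABC/PSO themselves):

import numpy as np
from scipy.optimize import differential_evolution

# Toy square-law drain-current model; parameters: threshold voltage Vth,
# transconductance K, channel-length modulation lam.
def ids_model(vgs, vds, Vth, K, lam):
    vov = np.maximum(vgs - Vth, 0.0)
    lin = K * (2 * vov * vds - vds ** 2)          # triode region
    sat = K * vov ** 2 * (1 + lam * vds)          # saturation region
    return np.where(vds >= vov, sat, lin)

vgs, vds = np.meshgrid(np.linspace(1, 3, 5), np.linspace(0, 3, 20))
true = (0.7, 2e-4, 0.02)
noise = 1 + 0.01 * np.random.default_rng(11).standard_normal(vgs.shape)
meas = ids_model(vgs, vds, *true) * noise         # simulated measurements

def err(p):
    # RMS error between measured and modelled currents.
    return np.sqrt(np.mean((ids_model(vgs, vds, *p) - meas) ** 2))

res = differential_evolution(err, bounds=[(0.3, 1.2), (1e-5, 1e-3), (0.0, 0.1)],
                             seed=0, tol=1e-10)
print(res.x)   # should recover roughly (0.7, 2e-4, 0.02)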
Adaptive cockroach swarm algorithm
NASA Astrophysics Data System (ADS)
Obagbuwa, Ibidun C.; Abidoye, Ademola P.
2017-07-01
An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent possible population collapse, maintain population diversity and create an adaptive search in each iteration. The performance of the proposed algorithm was evaluated on 16 global optimization benchmark function problems and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.
A chaos wolf optimization algorithm with self-adaptive variable step-size
NASA Astrophysics Data System (ADS)
Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun
2017-10-01
To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, fully simulating the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summons and siege. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to the parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.
Embedded algorithms within an FPGA-based system to process nonlinear time series data
NASA Astrophysics Data System (ADS)
Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.
2008-03-01
This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better computational and power efficiency.
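Independent of the FPGA implementation details, the Hilbert-transform step can be prototyped in a few lines of Python: the analytic signal yields the instantaneous amplitude and frequency of a nonstationary record:

import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 30 * t ** 2))     # toy nonstationary chirp

z = hilbert(x)                                     # analytic signal
inst_amplitude = np.abs(z)
inst_phase = np.unwrap(np.angle(z))
inst_frequency = np.diff(inst_phase) * fs / (2 * np.pi)   # in Hz

On the FPGA, the same computation would be realized in fixed-point arithmetic, which is exactly where the numerical-method and precision tradeoffs described above arise.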
Fireworks algorithm for mean-VaR/CVaR models
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Liu, Zhifeng
2017-10-01
Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has advantages over the genetic algorithm in solving the portfolio optimization problem, and that applying it in this field is feasible and promising.
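For reference, the risk measures being optimized can be evaluated directly from historical or simulated return scenarios; a minimal Python sketch of empirical portfolio VaR and CVaR, with random stand-in returns:

import numpy as np

def var_cvar(returns, weights, alpha=0.95):
    # Empirical VaR (loss quantile) and CVaR (mean loss beyond VaR).
    losses = -returns @ weights
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

rng = np.random.default_rng(4)
returns = rng.normal(5e-4, 0.01, size=(2500, 4))   # toy daily asset returns
w = np.full(4, 0.25)
print(var_cvar(returns, w))

A swarm-style search (fireworks, GA, and so on) would then minimize CVaR, or maximize mean return minus a risk-aversion multiple of VaR/CVaR, over the weights w subject to sum(w) = 1 and w >= 0.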
A reduction package for cross-dispersed echelle spectrograph data in IDL
NASA Astrophysics Data System (ADS)
Hall, Jeffrey C.; Neff, James E.
1992-12-01
We have written in IDL a data reduction package that performs reduction and extraction of cross-dispersed echelle spectrograph data. The present package includes a complete set of tools for extracting data from any number of spectral orders with arbitrary tilt and curvature. Essential elements include debiasing and flatfielding of the raw CCD image, removal of scattered light background, either nonoptimal or optimal extraction of data, and wavelength calibration and continuum normalization of the extracted orders. A growing set of support routines permits examination of the frame being processed to provide continuing checks on the statistical properties of the data and on the accuracy of the extraction. We will display some sample reductions and discuss the algorithms used. The inherent simplicity and user-friendliness of the IDL interface make this package a useful tool for spectroscopists. We will provide an email distribution list for those interested in receiving the package, and further documentation will be distributed at the meeting.
Automated diagnosis of coronary artery disease (CAD) patients using optimized SVM.
Davari Dolatabadi, Azam; Khadem, Siamak Esmael Zadeh; Asl, Babak Mohammadzadeh
2017-01-01
Coronary Artery Disease (CAD) is currently one of the most prevalent diseases and can lead to death, disability and economic loss in patients who suffer from cardiovascular disease. Diagnostic procedures for this disease by medical teams are typically invasive, yet they do not satisfy the required accuracy. In this study, we have proposed a methodology for the automatic diagnosis of normal and Coronary Artery Disease conditions using the Heart Rate Variability (HRV) signal extracted from the electrocardiogram (ECG). The features are extracted from the HRV signal in the time, frequency and nonlinear domains. Principal Component Analysis (PCA) is applied to reduce the dimension of the extracted features in order to reduce computational complexity and to reveal the hidden information underlying the data. Finally, a Support Vector Machine (SVM) classifier is utilized to classify the two classes of data using the extracted distinguishing features, with the SVM parameters optimized to improve accuracy. The reports provided in this paper indicate that the detection of the CAD class from the normal class using the proposed algorithm was performed with an accuracy of 99.2%, a sensitivity of 98.43%, and a specificity of 100%. This study has shown that methods based on feature extraction from biomedical signals are an appropriate approach to predicting the health status of patients.
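Assuming scikit-learn is available, the feature-reduction-plus-classification stage can be sketched as a pipeline; the HRV feature matrix here is a random placeholder, and the SVM settings stand in for the optimized parameters of the paper:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X would hold time/frequency/nonlinear HRV features per subject; here it is
# random stand-in data of hypothetical shape (subjects x features).
rng = np.random.default_rng(5)
X = rng.standard_normal((120, 30))
y = rng.integers(0, 2, 120)          # 0 = normal, 1 = CAD (placeholder labels)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),
                    SVC(kernel="rbf", C=10.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())

In practice, C and gamma would themselves be tuned, for example with GridSearchCV, which is the optimization step the abstract refers to.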
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-sign monitoring capabilities, but none are remote. This paper presents a simple yet efficient real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked with the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then applied to each of the many interest points to detect which are moving in harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
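The Pisarenko step can be prototyped compactly: for a single noisy tone, the eigenvector of the smallest eigenvalue of a 3x3 covariance matrix encodes the frequency. A minimal Python sketch, with an assumed 30 fps camera rate:

import numpy as np

def pisarenko_frequency(x, fs):
    # Pisarenko harmonic decomposition for one real sinusoid: the noise
    # eigenvector v of the order-3 covariance gives cos(w) = -v1 / (v0 + v2).
    x = x - x.mean()
    X = np.stack([x[:-2], x[1:-1], x[2:]])        # lagged data matrix
    R = X @ X.T / X.shape[1]                       # 3x3 covariance estimate
    _, V = np.linalg.eigh(R)
    v = V[:, 0]                                    # smallest-eigenvalue vector
    cos_w = np.clip(-v[1] / (v[0] + v[2]), -1.0, 1.0)
    return np.arccos(cos_w) * fs / (2 * np.pi)     # frequency in Hz

fs = 30.0                                          # camera frame rate (assumed)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(6)
breath = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.standard_normal(len(t))
print(pisarenko_frequency(breath, fs) * 60, "BPM")   # about 15 breaths/min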
In Silico Synthesis of Synthetic Receptors: A Polymerization Algorithm.
Cowen, Todd; Busato, Mirko; Karim, Kal; Piletsky, Sergey A
2016-12-01
Molecularly imprinted polymer (MIP) synthetic receptors have proposed and demonstrated applications in chemical extraction, sensors, assays, catalysis, targeted drug delivery, and the direct inhibition of harmful chemicals and pathogens; however, they rely heavily on effective design for success. An algorithm has been written which mimics radical polymerization atomistically, accounting for chemical and spatial discrimination, hybridization, and geometric optimization. Synthetic ephedrine receptors were synthesized in silico to demonstrate the accuracy of the algorithm in reproducing polymer structures at the atomic level. Comparative analysis in the design of a synthetic ephedrine receptor demonstrates that the new method can effectively identify affinity trends and binding site selectivities where commonly used alternative methods cannot. This new method is believed to generate the most realistic models of MIPs produced to date, suggesting that the algorithm could be a powerful new tool in the design and analysis of various polymers, including MIPs, with significant implications for biotechnology, biomimetics, and the materials sciences more generally.
Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.
Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo
2018-04-16
Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaic. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the gray scale difference, the gradient minimum, and the optical flow value of pixels in adjacent image overlapped area in a neighborhood are calculated, which can be applied to creating an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search the optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a more ideal splicing path for adjacent UAV images and avoid the ground objects better. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in the panoramic UAV images.
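The dynamic-programming search itself follows the classic seam-carving recurrence: each pixel inherits the cheapest cumulative cost of its three upper neighbors, and the seam is read back from the bottom row. A minimal Python sketch over a stand-in energy map:

import numpy as np

def dp_seam(energy):
    # Minimum-cost top-to-bottom seam through an energy map (8-connected).
    h, w = energy.shape
    cost = energy.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = int(np.argmin(cost[i - 1, lo:hi])) + lo
            cost[i, j] += cost[i - 1, k]
            back[i, j] = k
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 1, 0, -1):
        seam.append(back[i, seam[-1]])
    return seam[::-1]          # seam column index in each row

# In the paper's setting, the energy map would combine gray-level difference,
# gradient, and optical-flow terms over the overlap region; random stand-in:
rng = np.random.default_rng(7)
print(dp_seam(rng.random((8, 6))))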
The Optical Gravitational Lensing Experiment. Eclipsing Binary Stars in the Small Magellanic Cloud
NASA Astrophysics Data System (ADS)
Wyrzykowski, L.; Udalski, A.; Kubiak, M.; Szymanski, M. K.; Zebrun, K.; Soszynski, I.; Wozniak, P. R.; Pietrzynski, G.; Szewczyk, O.
2004-03-01
We present a new version of the OGLE-II catalog of eclipsing binary stars detected in the Small Magellanic Cloud, based on the Difference Image Analysis catalog of variable stars in the Magellanic Clouds, containing data collected from 1997 to 2000. We found 1351 eclipsing binary stars in the central 2.4 square degree area of the SMC; 455 of these are newly discovered objects not found in the previous release of the catalog. The eclipsing objects were selected with an automatic search algorithm based on an artificial neural network. The full catalog is accessible from the OGLE Internet archive.
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods and show how localization pre-processing can be incorporated into each to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization, and show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performance of the three representative greedy iterative heuristic methods improves with localization pre-processing when biological correlations, in the form of functional enrichment and PPI verification, constitute the main performance criteria. The fact that REAL, the localization-based random extraction method, performs better than the representative greedy heuristic methods under the same criteria further confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
Update on the SDSS-III MARVELS data pipeline development
NASA Astrophysics Data System (ADS)
Li, Rui; Ge, J.; Thomas, N. B.; Petersen, E.; Wang, J.; Ma, B.; Sithajan, S.; Shi, J.; Ouyang, Y.; Chen, Y.
2014-01-01
MARVELS (Multi-object APO Radial Velocity Exoplanet Large-area Survey), as one of the four surveys in the SDSS-III program, has monitored over 3,300 stars during 2008-2012, with each being visited an average of 26 times over a 2-year window. Although the early data pipeline was able to detect over 20 brown dwarf candidates and several hundred binaries, no giant planet candidates have been reliably identified due to its large systematic errors. Learning from past data pipeline lessons, we re-designed the entire pipeline to handle various types of systematic effects caused by the instrument (such as trace, slant, distortion, drifts and dispersion) and by observing-condition changes (such as illumination profile and continuum). We also introduced several advanced methods to precisely extract the RV signals. To date, we have achieved a long term RMS RV measurement error of 14 m/s for HIP-14810 (one of our reference stars) after removal of the known planet signal based on previous HIRES RV measurements. This new 1-D data pipeline has been used to robustly identify four giant planet candidates within the small fraction of the survey data that has been processed (Thomas et al. this meeting). The team is currently working hard to optimize the pipeline, especially the 2-D interference-fringe RV extraction, where early results show a 1.5 times improvement over the 1-D data pipeline. We are quickly approaching the survey baseline performance requirement of 10-35 m/s RMS for 8-12 solar type stars. With this fine-tuned pipeline and the soon to be processed plates of data, we expect to discover many more giant planet candidates and make a large statistical impact on exoplanet studies.
An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.
Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin
2016-12-01
Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper the learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, comparison with several existing well-known algorithms (nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm is able to find more accurate and more evenly distributed Pareto-optimal fronts than the compared ones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendi, S.H.; Bordbar, G.H.; Panah, B. Eslam
Motivated by the recent interest in spin-2 massive gravitons, we study the structure of neutron stars in the context of massive gravity. The modifications of the TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for neutron star matter (extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of massive gravity makes specific contributions to the structure of the neutron star and introduces new prescriptions for massive astrophysical objects. The mass-radius relation is examined, and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between the mass and radius of the neutron star versus the Planck mass is extracted.
Total soluble solids from banana: evaluation and optimization of extraction parameters.
Carvalho, Giovani B M; Silva, Daniel P; Santos, Júlio C; Izário Filho, Hélcio J; Vicente, António A; Teixeira, José A; Felipe, Maria das Graças A; Almeida e Silva, João B
2009-05-01
Banana, an important component in the diet of the global population, is one of the most consumed fruits in the world. This fruit is also very favorable for industrial processes (e.g., fermented beverages) due to its rich content of soluble solids and minerals, with low acidity. The main objective of this work was to evaluate the influence of factors such as banana weight and extraction time during a hot aqueous extraction process on the total soluble solids content of banana. The extract is to be used by the food and beverage industries. The experiments were performed with 105 mL of water, considering the moisture of the ripe banana (65%). Total sugar concentrations were obtained in a beer analyzer and the results expressed in degrees Plato (°P, the weight of the extract or the sugar equivalent in 100 g of solution at 20 °C), aiming to facilitate the use of these results by the beverage industries. After previous studies of the characterization of the fruit and of its ripening behavior, a 2^2 full-factorial star design was carried out, and a model was developed to describe the behavior of the dependent variable (total soluble solids) as a function of the factors (banana weight and extraction time), indicating as optimum extraction conditions 38.5 g of banana for 39.7 min.
Honey Bees Inspired Optimization Method: The Bees Algorithm.
Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo
2013-11-06
Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, in order to satisfy one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex, emerging from the behaviors of the individuals of the swarm. Researchers have developed computational optimization methods based on biology, such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees. The algorithm performs an exploitative neighborhood search combined with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers some advantages over other optimization methods, depending on the nature of the problem.
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction approaches, and to design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
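A Voigt fit of the sort described is straightforward with SciPy (assuming scipy.special.voigt_profile is available, SciPy 1.4 or later); sigma controls the Gaussian core and gamma the Lorentzian wings:

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(x, amp, center, sigma, gamma, offset):
    # Voigt profile: Gaussian core (sigma) convolved with Lorentzian wings
    # (gamma), matching the core/wing behavior noted above.
    return amp * voigt_profile(x - center, sigma, gamma) + offset

x = np.linspace(-10, 10, 400)
y = voigt(x, 5.0, 0.3, 1.0, 0.8, 0.1)
y += 0.02 * np.random.default_rng(8).standard_normal(x.size)
popt, _ = curve_fit(voigt, x, y, p0=[1.0, 0.0, 1.0, 1.0, 0.0],
                    bounds=([0, -10, 1e-3, 1e-3, -1], [20, 10, 5, 5, 1]))
print(popt)    # amp, center, sigma, gamma, offset

Fitting each spectral order this way, and mapping the recovered sigma and gamma across the detector, is one way the per-detector masks mentioned above could be built.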
Improving KPCA Online Extraction by Orthonormalization in the Feature Space.
Souza Filho, Joao B O; Diniz, Paulo S R
2018-04-01
Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
Automatic theory generation from analyst text files using coherence networks
NASA Astrophysics Data System (ADS)
Shaffer, Steven C.
2014-05-01
This paper describes a three-phase process for extracting knowledge from analyst textual reports. Phase 1 performs natural language processing on the source text to extract subject-predicate-object triples. In phase 2, these triples are fed into a coherence network analysis process that uses a genetic algorithm for optimization. Finally, the highest-value subnetworks are rendered as a semantic network graph for display. Initial work on a well-known data set (a Wikipedia article on Abraham Lincoln) has shown excellent results without any specific tuning. The process was then run on the SYNthetic Counter-INsurgency (SYNCOIN) data set, developed at Penn State, yielding interesting and potentially useful results.
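Phase 1, the triple extraction, might look like the following spaCy sketch; the paper's actual NLP pipeline, coherence scoring, and genetic-algorithm settings are not specified in the abstract, so everything here is illustrative:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Pull (subject, predicate, object) triples from the dependency parse."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children
                            if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children
                           if c.dep_ in ("dobj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_triples("Abraham Lincoln led the Union during the Civil War."))
# e.g. [('Lincoln', 'lead', 'Union')]
```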
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm for parameter identification in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied in simulations to estimate the parameters of an unknown plant. Unlike other heuristic search methods, TLBO requires no algorithm-specific parameters. Big bang-big crunch (BB-BC) optimization and particle swarm optimization (PSO) are also applied to the filter design for comparison, with the unknown filter parameters treated as a vector to be optimized. The algorithms were implemented in MATLAB. Experimental results show that TLBO estimates the filter parameters more accurately than BB-BC optimization and converges faster than PSO; TLBO is therefore preferable where accuracy matters more than convergence speed.
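A sketch of the TLBO teacher and learner phases applied to IIR parameter identification, written in Python (the paper itself used MATLAB); the second-order plant, bounds, and population settings below are assumed for illustration:

```python
import numpy as np
from scipy.signal import lfilter

def tlbo(f, lo, hi, pop=30, iterations=300, rng=None):
    """Teaching-learning-based optimization sketch (minimization)."""
    rng = rng or np.random.default_rng(0)
    dim = lo.size
    X = rng.uniform(lo, hi, (pop, dim))
    F = np.array([f(x) for x in X])
    for _ in range(iterations):
        teacher = X[F.argmin()]
        mean = X.mean(axis=0)
        for i in range(pop):
            # teacher phase: move toward the teacher, away from the class mean
            TF = rng.integers(1, 3)          # teaching factor, 1 or 2
            cand = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean), lo, hi)
            if (fc := f(cand)) < F[i]:
                X[i], F[i] = cand, fc
            # learner phase: learn from a random classmate
            j = rng.integers(pop)
            while j == i:
                j = rng.integers(pop)
            step = (X[j] - X[i]) if F[j] < F[i] else (X[i] - X[j])
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            if (fc := f(cand)) < F[i]:
                X[i], F[i] = cand, fc
    return X[F.argmin()], F.min()

# identify a hypothetical second-order IIR plant from input/output data
rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = lfilter([0.3, 0.2], [1.0, -0.9, 0.5], u)
def mse(p):                                  # p = [b0, b1, a1, a2]
    return float(np.mean((y - lfilter(p[:2], np.r_[1.0, p[2:]], u)) ** 2))
best, err = tlbo(mse, np.full(4, -1.0), np.full(4, 1.0))
print(best, err)
```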
Enhancements of Bayesian Blocks; Application to Large Light Curve Databases
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2015-01-01
Bayesian Blocks are optimal piecewise constant representations (step-function fits) of light curves. The simple dynamic-programming algorithm implementing this idea has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations; Scargle, Norris, Jackson, and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and a refined procedure for precise timing of transient events in sparse data. Example demonstrations include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations (The Cepheid Galactic Internet; Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv:0809.0339; Walkowicz et al., in progress).
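Astropy ships an implementation of this algorithm, so a minimal demonstration on synthetic event data might look as follows; the burst times and the p0 false-alarm setting are illustrative, not drawn from the Kepler analysis:

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Hypothetical photon arrival times: quiescent background plus a transient burst.
rng = np.random.default_rng(0)
t = np.sort(np.concatenate([rng.uniform(0, 50, 200),      # background events
                            rng.uniform(20, 22, 150)]))   # burst around t = 20-22

# Optimal change points for event data; p0 is the false-alarm probability.
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)   # block edges should bracket the burst interval
```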
Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO)
Yan, Lixin; Zhang, Yishi; He, Yi; Gao, Song; Zhu, Dunyao; Ran, Bin; Wu, Qing
2016-01-01
The ability to identify hazardous traffic events is considered one of the most effective means of reducing the occurrence of crashes. Previous studies have examined only certain hazardous traffic events, mainly based on dedicated video-stream and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events, and (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited for a naturalistic driving experiment in Wuhan, China, and multi-sensor data were collected for different types of traffic events. The results indicated that vehicle speed, the standard deviations of speed, skin conductance, brake pressure, and acceleration, the turn signal, steering acceleration, and the vertical (Z-axis) acceleration in g have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the prediction accuracy exceeded 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm ranked best in prediction accuracy. These conclusions can serve as reference evidence for the development of dangerous-situation warning products and the design of intelligent vehicles. PMID:27420073
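The two-stage MB-SMO idea can be approximated as follows; scikit-learn has no Markov-blanket implementation, so a mutual-information filter stands in for the MB step here, and the SVM classifier relies on libsvm's SMO-type solver. The data, feature count, and thresholds are all hypothetical:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC  # libsvm's solver is an SMO variant

# Hypothetical multi-sensor features (speed, brake pressure, skin conductance, ...)
# with a binary hazardous-event label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.5, 1000) > 1.0).astype(int)

# Stage 1 (stand-in for the MB step): keep the most label-informative features.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-8:]

# Stage 2: SMO-trained SVM on the selected features.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y,
                                          test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel='rbf', C=10.0).fit(scaler.transform(X_tr), y_tr)
print("accuracy: %.3f" % clf.score(scaler.transform(X_te), y_te))
```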