Science.gov

Sample records for algorithm significantly outperforms

  1. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians.

    PubMed

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J; Adatia, Ian

    2016-01-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% versus 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral. PMID:27609672
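
    The pipeline sketched in this abstract (MFCC features summarizing each recording, one Gaussian mixture model per class, classification by the higher likelihood) can be outlined in a few lines. The sketch below assumes librosa and scikit-learn; the file lists, sampling rate, and model sizes are illustrative placeholders, not the authors' implementation.

```python
# Sketch: MFCC features + one Gaussian mixture model per class (PH vs. normal),
# classifying a recording by the higher average log-likelihood.
# Assumes librosa and scikit-learn; not the authors' actual pipeline.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, sr=4000, n_mfcc=13):
    """Load a heart-sound recording and return per-frame MFCC vectors."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

def train_models(ph_files, normal_files, n_components=8):
    """Fit one GMM per class on pooled MFCC frames from training recordings."""
    ph_X = np.vstack([mfcc_features(f) for f in ph_files])
    nm_X = np.vstack([mfcc_features(f) for f in normal_files])
    gmm_ph = GaussianMixture(n_components, covariance_type="diag").fit(ph_X)
    gmm_nm = GaussianMixture(n_components, covariance_type="diag").fit(nm_X)
    return gmm_ph, gmm_nm

def classify(wav_path, gmm_ph, gmm_nm):
    """Label a recording by whichever class model explains its frames better."""
    X = mfcc_features(wav_path)
    return "PH" if gmm_ph.score(X) > gmm_nm.score(X) else "normal"
```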

  2. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    PubMed Central

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-01-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% versus 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral. PMID:27609672

  3. Algorithm for Detecting Significant Locations from Raw GPS Data

    NASA Astrophysics Data System (ADS)

    Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi

    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N²) algorithms since distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information by constructing random histograms using locality sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
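
    As a rough illustration of the density idea only (not the paper's locality-sensitive-hashing random histograms), the sketch below hashes GPS fixes into several randomly offset coarse grids and keeps points that fall in a dense cell under most grids; the cell size, grid count, and density threshold are assumptions.

```python
# Sketch: density-based selection of candidate "significant location" points from raw
# GPS fixes, using randomly offset coarse grids as a crude stand-in for the paper's
# locality-sensitive-hashing random histograms. All parameters are illustrative.
import numpy as np
from collections import Counter

def dense_points(latlon, cell_deg=0.001, n_grids=8, min_count=20, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(latlon, dtype=float)     # shape (N, 2): latitude, longitude
    votes = np.zeros(len(pts), dtype=int)
    for _ in range(n_grids):
        offset = rng.uniform(0, cell_deg, size=2)
        cells = np.floor((pts + offset) / cell_deg).astype(int)
        counts = Counter(map(tuple, cells))
        votes += np.array([counts[tuple(c)] >= min_count for c in cells])
    # Keep points that look dense under most of the random grids.
    return pts[votes > n_grids // 2]
```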

  4. CD4 Count Outperforms World Health Organization Clinical Algorithm for Point-of-Care HIV Diagnosis among Hospitalized HIV-exposed Malawian Infants

    PubMed Central

    Maliwichi, Madalitso; Rosenberg, Nora E.; Macfie, Rebekah; Olson, Dan; Hoffman, Irving; van der Horst, Charles M.; Kazembe, Peter N.; Hosseinipour, Mina C.; McCollum, Eric D.

    2014-01-01

    Objective To determine, for the WHO algorithm for point-of-care diagnosis of HIV infection, the agreement levels between pediatricians and non-physician clinicians, and to compare sensitivity and specificity profiles of the WHO algorithm and different CD4 thresholds against HIV PCR testing in hospitalized Malawian infants. Methods In 2011, hospitalized HIV-exposed infants <12 months in Lilongwe, Malawi were evaluated independently with the WHO algorithm by both a pediatrician and a clinical officer. Blood was collected for CD4 and molecular HIV testing (DNA or RNA PCR). Using molecular testing as the reference, sensitivity, specificity, and positive predictive value (PPV) were determined for the WHO algorithm and CD4 count thresholds of 1500 and 2000 cells/mm3 by pediatricians and clinical officers. Results We enrolled 166 infants (50% female, 34% <2 months, 37% HIV-infected). Sensitivity was higher using CD4 thresholds (<1500, 80%; <2000, 95%) than with the algorithm (pediatricians, 57%; clinical officers, 71%). Specificity was comparable for CD4 thresholds (<1500, 68%; <2000, 50%) and the algorithm (pediatricians, 55%; clinical officers, 50%). The positive predictive values were slightly better using CD4 thresholds (<1500, 59%; <2000, 52%) than the algorithm (pediatricians, 43%; clinical officers, 45%) at this prevalence. Conclusion Performance by the WHO algorithm and CD4 thresholds resulted in many misclassifications. Point-of-care CD4 thresholds of <1500 cells/mm3 or <2000 cells/mm3 could identify more HIV-infected infants with fewer false positives than the algorithm. However, a point-of-care option with better performance characteristics is needed for accurate, timely HIV diagnosis. PMID:24754543
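
    The headline numbers above reduce to standard screening metrics computed against the PCR reference. A minimal sketch, with illustrative inputs rather than the study data:

```python
# Sketch: sensitivity, specificity, and PPV of a point-of-care rule (e.g. CD4 < 1500
# cells/mm3) against molecular HIV PCR as the reference standard. Inputs are
# illustrative boolean arrays, not the study data.
import numpy as np

def screening_metrics(test_positive, pcr_positive):
    test_positive = np.asarray(test_positive, bool)
    pcr_positive = np.asarray(pcr_positive, bool)
    tp = np.sum(test_positive & pcr_positive)
    fp = np.sum(test_positive & ~pcr_positive)
    fn = np.sum(~test_positive & pcr_positive)
    tn = np.sum(~test_positive & ~pcr_positive)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# Example: flag infants below a CD4 threshold, then compare with PCR results.
# metrics = screening_metrics(cd4_counts < 1500, pcr_results)
```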

  5. Discovering simple DNA sequences by the algorithmic significance method.

    PubMed

    Milosavljević, A; Jurka, J

    1993-08-01

    A new method, 'algorithmic significance', is proposed as a tool for discovery of patterns in DNA sequences. The main idea is that patterns can be discovered by finding ways to encode the observed data concisely. In this sense, the method can be viewed as a formal version of Occam's Razor. In this paper the method is applied to discover significantly simple DNA sequences. We define DNA sequences to be simple if they contain repeated occurrences of certain 'words' and thus can be encoded in a small number of bits. This definition includes minisatellites and microsatellites. A standard dynamic programming algorithm for data compression is applied to compute the minimal encoding lengths of sequences in linear time. An electronic mail server for identification of simple sequences based on the proposed method has been installed at the Internet address pythia@anl.gov. PMID:8402207
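
    The core idea, that the encoding length saved relative to a null model bounds the p-value by 2^-d, can be sketched with a general-purpose compressor standing in for the paper's dynamic-programming word encoder (an assumption; the compressor's fixed overhead makes this conservative for short sequences):

```python
# Sketch: bits saved relative to a 2-bits-per-base null model; the chance of saving
# d bits by luck is at most 2**-d. zlib stands in for the paper's word encoder.
import zlib

def algorithmic_significance(seq):
    null_bits = 2 * len(seq)                         # A, C, G, T at 2 bits per base
    coded_bits = 8 * len(zlib.compress(seq.encode(), 9))
    d = null_bits - coded_bits                       # encoding length saved
    p_bound = 2.0 ** (-d) if d > 0 else 1.0          # upper bound on the p-value
    return d, p_bound

print(algorithmic_significance("CA" * 200))          # a microsatellite-like repeat
```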

  6. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.

  7. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625

  8. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.

    PubMed

    Amudha, P; Karthik, S; Sivakumari, S

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625

  9. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  10. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high-resolution SAR data of the Veridian X-band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.

  11. A clinical algorithm for triaging patients with significant lymphadenopathy in primary health care settings in Sudan

    PubMed Central

    El Hag, Imad A.; Elsiddig, Kamal E.; Elsafi, Mohamed E.M.O; Elfaki, Mona E.E.; Musa, Ahmed M.; Musa, Brima Y.; Elhassan, Ahmed M.

    2013-01-01

    Background Tuberculosis is a major health problem in developing countries. The distinction between tuberculous lymphadenitis, non-specific lymphadenitis and malignant lymph node enlargement has to be made at primary health care levels using easy, simple and cheap methods. Objective To develop a reliable clinical algorithm for primary care settings to triage cases of non-specific, tuberculous and malignant lymphadenopathies. Methods Calculation of the odds ratios (OR) of the chosen predictor variables was carried out using logistic regression. The numerical score values of the predictor variables were weighted against their respective OR. The performance of the score was evaluated by the ROC (receiver operating characteristic) curve. Results Four predictor variables, namely Mantoux reading, erythrocyte sedimentation rate (ESR), nocturnal fever and discharging sinuses, correlated significantly with TB diagnosis and were included in the reduced model to establish score A. For score B, the reduced model included Mantoux reading, ESR, lymph-node size and lymph-node number as predictor variables for malignant lymph nodes. Score A ranged from 0 to 12, and a cut-off point of 6 gave a best sensitivity and specificity of 91% and 90%, respectively, whilst score B ranged from -3 to 8, and a cut-off point of 3 gave a best sensitivity and specificity of 83% and 76%, respectively. The calculated area under the ROC curve was 0.964 (95% CI, 0.949-0.980) and 0.856 (95% CI, 0.787-0.925) for scores A and B, respectively, indicating good performance. Conclusion The developed algorithm can efficiently triage cases with tuberculous and malignant lymphadenopathies for treatment or referral to specialised centres for further work-up.
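
    A minimal sketch of the score-building recipe described above (logistic regression, integer weights from the rounded log odds ratios, ROC-based cut-off), assuming scikit-learn; the variable names and data are placeholders, not the study dataset:

```python
# Sketch: build a clinical score by weighting each predictor with its rounded log
# odds ratio from a logistic regression, then pick a cut-off from the ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

def build_score(X, y):
    """X: predictor matrix (e.g. Mantoux, ESR, nocturnal fever, sinuses); y: diagnosis."""
    model = LogisticRegression().fit(X, y)
    weights = np.round(model.coef_[0]).astype(int)   # integer weights ~ log odds ratios
    score = X @ weights
    fpr, tpr, thresholds = roc_curve(y, score)
    best_cutoff = thresholds[np.argmax(tpr - fpr)]   # Youden's J statistic
    return weights, best_cutoff, roc_auc_score(y, score)
```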

  12. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
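
    A simplified sketch of the wrapper idea, a genetic search over binary feature masks scored by an extreme learning machine (random hidden layer plus least-squares readout). For brevity it uses selection and mutation only (no crossover), and the population size, mutation rate, and hidden-layer size are assumptions rather than the paper's settings.

```python
# Sketch: GA-wrapped extreme learning machine for selecting buoy wave parameters
# that best reconstruct significant wave height Hs. All settings are illustrative.
import numpy as np

def elm_rmse(X_tr, y_tr, X_te, y_te, hidden=50, seed=0):
    """Extreme learning machine: random hidden layer, least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_tr.shape[1], hidden))
    b = rng.normal(size=hidden)
    H_tr, H_te = np.tanh(X_tr @ W + b), np.tanh(X_te @ W + b)
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    return float(np.sqrt(np.mean((H_te @ beta - y_te) ** 2)))

def ga_select_features(X_tr, y_tr, X_te, y_te, pop=20, gens=30, p_mut=0.1, seed=0):
    """Evolve binary feature masks; fitness is the negative Hs reconstruction error."""
    rng = np.random.default_rng(seed)
    masks = rng.random((pop, X_tr.shape[1])) < 0.5
    def fitness(m):
        return -elm_rmse(X_tr[:, m], y_tr, X_te[:, m], y_te) if m.any() else -np.inf
    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(scores)[-(pop // 2):]]          # keep the better half
        children = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
        children ^= rng.random(children.shape) < p_mut             # bit-flip mutation
        masks = np.vstack([parents, children])
    return masks[np.argmax([fitness(m) for m in masks])]
```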

  13. A Genetic Algorithm for Learning Significant Phrase Patterns in Radiology Reports

    SciTech Connect

    Patton, Robert M; Potok, Thomas E; Beckerman, Barbara G; Treadwell, Jim N

    2009-01-01

    Radiologists disagree with each other over the characteristics and features of what constitutes a normal mammogram and the terminology to use in the associated radiology report. Recently, the focus has been on classifying abnormal or suspicious reports, but even this process needs further layers of clustering and gradation, so that individual lesions can be more effectively classified. Using a genetic algorithm, the approach described here successfully learns phrase patterns for two distinct classes of radiology reports (normal and abnormal). These patterns can then be used as a basis for automatically analyzing, categorizing, clustering, or retrieving relevant radiology reports for the user.

  14. Digital image analysis outperforms manual biomarker assessment in breast cancer.

    PubMed

    Stålhammar, Gustav; Fuentes Martinez, Nelson; Lippert, Michael; Tobin, Nicholas P; Mølholm, Ida; Kis, Lorand; Rosin, Gustaf; Rantalainen, Mattias; Pedersen, Lars; Bergh, Jonas; Grunkin, Michael; Hartman, Johan

    2016-04-01

    In the spectrum of breast cancers, categorization according to the four gene expression-based subtypes 'Luminal A,' 'Luminal B,' 'HER2-enriched,' and 'Basal-like' is the method of choice for prognostic and predictive value. As gene expression assays are not yet universally available, routine immunohistochemical stains act as surrogate markers for these subtypes. Thus, congruence of surrogate markers and gene expression tests is of utmost importance. In this study, 3 cohorts of primary breast cancer specimens (total n=436) with up to 28 years of survival data were scored for Ki67, ER, PR, and HER2 status manually and by digital image analysis (DIA). The results were then compared for sensitivity and specificity for the Luminal B subtype, concordance to PAM50 assays in subtype classification and prognostic power. The DIA system used was the Visiopharm Integrator System. DIA outperformed manual scoring in terms of sensitivity and specificity for the Luminal B subtype, widely considered the most challenging distinction in surrogate subclassification, and produced slightly better concordance and Cohen's κ agreement with PAM50 gene expression assays. Manual biomarker scores and DIA essentially matched each other for Cox regression hazard ratios for all-cause mortality. When the Nottingham combined histologic grade (Elston-Ellis) was used as a prognostic surrogate, stronger Spearman's rank-order correlations were produced by DIA. Prognostic value of Ki67 scores in terms of likelihood ratio χ² (LR χ²) was higher for DIA, which also added significantly more prognostic information to the manual scores (LR Δχ²). In conclusion, the system for DIA evaluated here was in most aspects a superior alternative to manual biomarker scoring. It also has the potential to reduce time consumption for pathologists, as many of the steps in the workflow are either automatic or feasible to manage without pathological expertise. PMID:26916072

  15. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  16. Analyzing Enron Data: Bitmap Indexing Outperforms MySQL Queries by Several Orders of Magnitude

    SciTech Connect

    Stockinger, Kurt; Rotem, Doron; Shoshani, Arie; Wu, Kesheng

    2006-01-28

    FastBit is an efficient, compressed bitmap indexing technology that was developed in our group. In this report we evaluate the performance of MySQL and FastBit for analyzing the email traffic of the Enron dataset. The first finding shows that materializing the join results of several tables significantly improves the query performance. The second finding shows that FastBit outperforms MySQL by several orders of magnitude.

  17. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm

    PubMed Central

    Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu

    2016-01-01

    Within non-small cell lung cancer (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. Few overlaps between these two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945

  18. The ontogeny of human point following in dogs: When younger dogs outperform older.

    PubMed

    Zaine, Isabela; Domeniconi, Camila; Wynne, Clive D L

    2015-10-01

    We investigated puppies' responsiveness to hand points differing in salience. Experiment 1 compared performance of younger (8 weeks old) and older (12 weeks) shelter pups in following pointing gestures. We hypothesized that older puppies would show better performance. Both groups followed the easy and moderate but not the difficult pointing cues. Surprisingly, the younger pups outperformed the older ones in following the moderate and difficult points. Investigation of subjects' backgrounds revealed that significantly more younger pups had experience living in human homes than did the older pups. Thus, we conducted a second experiment to isolate the variable experience. We collected additional data from older pet pups living in human homes on the same three point types and compared their performance with the shelter pups from Experiment 1. The pups living in homes accurately followed all three pointing cues. When comparing both experienced groups, the older pet pups outperformed the younger shelter ones, as predicted. When comparing the two same-age groups differing in background experience, the pups living in homes outperformed the shelter pups. A significant correlation between experience with humans and success in following less salient cues was found. The importance of ontogenetic learning in puppies' responsiveness to certain human social cues is discussed. PMID:26192336

  19. A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem

    NASA Astrophysics Data System (ADS)

    Jäger, Gerold; Zhang, Weixiong

    The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.

  20. Extortion can outperform generosity in the iterated prisoner's dilemma.

    PubMed

    Wang, Zhijian; Zhou, Yanran; Lien, Jaimie W; Zheng, Jie; Xu, Bin

    2016-01-01

    Zero-determinant (ZD) strategies, as discovered by Press and Dyson, can enforce a linear relationship between a pair of players' scores in the iterated prisoner's dilemma. Particularly, the extortionate ZD strategies can enforce and exploit cooperation, providing a player with a score advantage, and consequently higher scores than those from either mutual cooperation or generous ZD strategies. In laboratory experiments in which human subjects were paired with computer co-players, we demonstrate that both the generous and the extortionate ZD strategies indeed enforce a unilateral control of the reward. When the experimental setting is sufficiently long and the computerized nature of the opponent is known to human subjects, the extortionate strategy outperforms the generous strategy. Human subjects' cooperation rates when playing against extortionate and generous ZD strategies are similar after learning has occurred. More than half of extortionate strategists finally obtain an average score higher than that from mutual cooperation. PMID:27067513
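
    The extortionate ZD construction can be written down directly from the Press-Dyson formulas and checked by simulation. The sketch below uses the standard payoffs (T, R, P, S) = (5, 3, 1, 0); the opponent strategy and the (phi, chi) values are illustrative, not taken from the experiments.

```python
# Sketch: build a Press-Dyson extortionate zero-determinant strategy and check
# numerically that it enforces s_X - P = chi * (s_Y - P) against an arbitrary
# memory-one opponent. Payoffs, phi, chi, and the opponent are illustrative.
import numpy as np

T, R, P, S = 5, 3, 1, 0

def extortion_strategy(chi=3.0, phi=1/26):
    """Probabilities that X cooperates after outcomes (CC, CD, DC, DD)."""
    return np.array([1 - phi * (chi - 1) * (R - P),
                     1 + phi * (S - chi * T + (chi - 1) * P),
                     phi * (T - chi * S + (chi - 1) * P),
                     0.0])

def long_run_payoffs(p, q, n=200000, seed=0):
    """Simulate two memory-one strategies p (X) and q (Y); return average payoffs."""
    rng = np.random.default_rng(seed)
    payoff_x = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
    payoff_y = {("C", "C"): R, ("C", "D"): T, ("D", "C"): S, ("D", "D"): P}
    idx = {("C", "C"): 0, ("C", "D"): 1, ("D", "C"): 2, ("D", "D"): 3}
    state, sx, sy = ("C", "C"), 0.0, 0.0
    for _ in range(n):
        x = "C" if rng.random() < p[idx[state]] else "D"
        # Y conditions on the mirrored history (its own move listed first).
        y = "C" if rng.random() < q[idx[(state[1], state[0])]] else "D"
        state = (x, y)
        sx += payoff_x[state]
        sy += payoff_y[state]
    return sx / n, sy / n

p = extortion_strategy()                  # (11/13, 1/2, 7/26, 0)
q = np.array([0.8, 0.6, 0.4, 0.2])        # arbitrary memory-one opponent
sx, sy = long_run_payoffs(p, q)
print(sx - P, 3.0 * (sy - P))             # the two numbers should roughly match
```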

  1. Dodecylresorufin (C12R) Outperforms Resorufin in Microdroplet Bacterial Assays.

    PubMed

    Scheler, Ott; Kaminski, Tomasz S; Ruszczak, Artur; Garstecki, Piotr

    2016-05-11

    This paper proves that dodecylresorufin (C12R) outperforms resorufin (the conventional form of this dye) in droplet microfluidic bacterial assays. Resorufin is a marker dye that is widely used in different fields of microbiology and has increasingly been applied in droplet microfluidic assays and experiments. The main concern associated with resorufin in droplet-based systems is dye leakage into the oil phase and neighboring droplets. The leakage decreases the performance of assays because it causes averaging of the signal between the positive (bacteria-containing) and negative (empty) droplets. Here we show that C12R is a promising alternative to conventional resorufin because it maintains higher sensitivity, specificity, and signal-to-noise ratio over time. These characteristics make C12R a suitable reagent for droplet digital assays and for monitoring of microbial growth in droplets. PMID:27100211

  2. Better than Nature: Nicotinamide Biomimetics That Outperform Natural Coenzymes.

    PubMed

    Knaus, Tanja; Paul, Caroline E; Levy, Colin W; de Vries, Simon; Mutti, Francesco G; Hollmann, Frank; Scrutton, Nigel S

    2016-01-27

    The search for affordable, green biocatalytic processes is a challenge for chemicals manufacture. Redox biotransformations are potentially attractive, but they rely on unstable and expensive nicotinamide coenzymes that have prevented their widespread exploitation. Stoichiometric use of natural coenzymes is not viable economically, and the instability of these molecules hinders catalytic processes that employ coenzyme recycling. Here, we investigate the efficiency of man-made synthetic biomimetics of the natural coenzymes NAD(P)H in redox biocatalysis. Extensive studies with a range of oxidoreductases belonging to the "ene" reductase family show that these biomimetics are excellent analogues of the natural coenzymes, revealed also in crystal structures of the ene reductase XenA with selected biomimetics. In selected cases, these biomimetics outperform the natural coenzymes. "Better-than-Nature" biomimetics should find widespread application in fine and specialty chemicals production by harnessing the power of high stereo-, regio-, and chemoselective redox biocatalysts and enabling reactions under mild conditions at low cost. PMID:26727612

  3. Extortion can outperform generosity in the iterated prisoner's dilemma

    PubMed Central

    Wang, Zhijian; Zhou, Yanran; Lien, Jaimie W.; Zheng, Jie; Xu, Bin

    2016-01-01

    Zero-determinant (ZD) strategies, as discovered by Press and Dyson, can enforce a linear relationship between a pair of players' scores in the iterated prisoner's dilemma. Particularly, the extortionate ZD strategies can enforce and exploit cooperation, providing a player with a score advantage, and consequently higher scores than those from either mutual cooperation or generous ZD strategies. In laboratory experiments in which human subjects were paired with computer co-players, we demonstrate that both the generous and the extortionate ZD strategies indeed enforce a unilateral control of the reward. When the experimental setting is sufficiently long and the computerized nature of the opponent is known to human subjects, the extortionate strategy outperforms the generous strategy. Human subjects' cooperation rates when playing against extortionate and generous ZD strategies are similar after learning has occurred. More than half of extortionate strategists finally obtain an average score higher than that from mutual cooperation. PMID:27067513

  4. Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets

    PubMed Central

    Lewinski, Peter

    2015-01-01

    Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings. PMID:26441761

  5. Smiling on the Inside: The Social Benefits of Suppressing Positive Emotions in Outperformance Situations.

    PubMed

    Schall, Marina; Martiny, Sarah E; Goetz, Thomas; Hall, Nathan C

    2016-05-01

    Although expressing positive emotions is typically socially rewarded, in the present work, we predicted that people suppress positive emotions and thereby experience social benefits when outperformed others are present. We tested our predictions in three experimental studies with high school students. In Studies 1 and 2, we manipulated the type of social situation (outperformance vs. non-outperformance) and assessed suppression of positive emotions. In both studies, individuals reported suppressing positive emotions more in outperformance situations than in non-outperformance situations. In Study 3, we manipulated the social situation (outperformance vs. non-outperformance) as well as the videotaped person's expression of positive emotions (suppression vs. expression). The findings showed that when outperforming others, individuals were indeed evaluated more positively when they suppressed rather than expressed their positive emotions, and demonstrate the importance of the specific social situation with respect to the effects of suppression. PMID:27029576

  6. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost

  7. Better than Nature: Nicotinamide Biomimetics That Outperform Natural Coenzymes

    PubMed Central

    2016-01-01

    The search for affordable, green biocatalytic processes is a challenge for chemicals manufacture. Redox biotransformations are potentially attractive, but they rely on unstable and expensive nicotinamide coenzymes that have prevented their widespread exploitation. Stoichiometric use of natural coenzymes is not viable economically, and the instability of these molecules hinders catalytic processes that employ coenzyme recycling. Here, we investigate the efficiency of man-made synthetic biomimetics of the natural coenzymes NAD(P)H in redox biocatalysis. Extensive studies with a range of oxidoreductases belonging to the “ene” reductase family show that these biomimetics are excellent analogues of the natural coenzymes, revealed also in crystal structures of the ene reductase XenA with selected biomimetics. In selected cases, these biomimetics outperform the natural coenzymes. “Better-than-Nature” biomimetics should find widespread application in fine and specialty chemicals production by harnessing the power of high stereo-, regio-, and chemoselective redox biocatalysts and enabling reactions under mild conditions at low cost. PMID:26727612

  8. Adult vultures outperform juveniles in challenging thermal soaring conditions

    PubMed Central

    Harel, Roi; Horvitz, Nir; Nathan, Ran

    2016-01-01

    Due to the potentially detrimental consequences of low performance in basic functional tasks, individuals are expected to improve performance with age and show the most marked changes during early stages of life. Soaring-gliding birds use rising-air columns (thermals) to reduce energy expenditure allocated to flight. We offer a framework to evaluate thermal soaring performance, and use GPS-tracking to study movements of Eurasian griffon vultures (Gyps fulvus). Because the location and intensity of thermals are variable, we hypothesized that soaring performance would improve with experience and predicted that the performance of inexperienced individuals (<2 months) would be inferior to that of experienced ones (>5 years). No differences were found in body characteristics, climb rates under low wind shear, and thermal selection, presumably due to vultures’ tendency to forage in mixed-age groups. Adults, however, outperformed juveniles in their ability to adjust fine-scale movements under challenging conditions, as juveniles had lower climb rates under intermediate wind shear, particularly on the lee-side of thermal columns. Juveniles were also less efficient along the route both in terms of time and energy. The consequences of these handicaps are probably exacerbated if juveniles lag behind adults in finding and approaching food. PMID:27291590

  9. Adult vultures outperform juveniles in challenging thermal soaring conditions.

    PubMed

    Harel, Roi; Horvitz, Nir; Nathan, Ran

    2016-01-01

    Due to the potentially detrimental consequences of low performance in basic functional tasks, individuals are expected to improve performance with age and show the most marked changes during early stages of life. Soaring-gliding birds use rising-air columns (thermals) to reduce energy expenditure allocated to flight. We offer a framework to evaluate thermal soaring performance, and use GPS-tracking to study movements of Eurasian griffon vultures (Gyps fulvus). Because the location and intensity of thermals are variable, we hypothesized that soaring performance would improve with experience and predicted that the performance of inexperienced individuals (<2 months) would be inferior to that of experienced ones (>5 years). No differences were found in body characteristics, climb rates under low wind shear, and thermal selection, presumably due to vultures' tendency to forage in mixed-age groups. Adults, however, outperformed juveniles in their ability to adjust fine-scale movements under challenging conditions, as juveniles had lower climb rates under intermediate wind shear, particularly on the lee-side of thermal columns. Juveniles were also less efficient along the route both in terms of time and energy. The consequences of these handicaps are probably exacerbated if juveniles lag behind adults in finding and approaching food. PMID:27291590

  10. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting that the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
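
    The sketch below shows only the simplest baseline mentioned above, per-pixel direct matrix inversion followed by a non-local-means filter on each material image; it is not the HYPR-NLM iteration itself, and the mixing coefficients and filter parameters are placeholders. It assumes scikit-image.

```python
# Sketch: image-domain two-material decomposition by per-pixel matrix inversion,
# with non-local-means denoising to suppress the amplified noise. This is the
# "direct inversion + NLM" baseline, not the paper's HYPR-NLM algorithm.
import numpy as np
from skimage.restoration import denoise_nl_means

def decompose_dect(low_kvp, high_kvp, mixing, h=0.05):
    """low_kvp, high_kvp: co-registered DECT images; mixing: 2x2 material coefficients."""
    A_inv = np.linalg.inv(np.asarray(mixing, float))
    stack = np.stack([low_kvp, high_kvp], axis=-1)          # (H, W, 2)
    materials = stack @ A_inv.T                              # per-pixel decomposition
    # Edge-preserving noise suppression on each material image.
    return [denoise_nl_means(materials[..., k], h=h, patch_size=5,
                             patch_distance=7, fast_mode=True)
            for k in range(2)]
```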

  11. A novel bit-quad-based Euler number computing algorithm.

    PubMed

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by use of the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrated that our method significantly outperforms conventional Euler number computing algorithms. PMID:26636023
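
    For reference, the classical bit-quad (Gray) formula that such algorithms build on counts 2 × 2 patterns over the zero-padded image; a minimal sketch of that conventional baseline (not the proposed two-pattern algorithm):

```python
# Sketch: Gray's bit-quad counting for the Euler number of a binary image.
# C1 = quads with one foreground pixel, C3 = three foreground pixels,
# CD = the two diagonal two-pixel patterns.
import numpy as np

def euler_number(img, connectivity=8):
    b = np.pad(np.asarray(img, dtype=int) > 0, 1).astype(int)
    q = (b[:-1, :-1], b[:-1, 1:], b[1:, :-1], b[1:, 1:])   # four corners of each quad
    s = q[0] + q[1] + q[2] + q[3]
    c1 = np.sum(s == 1)
    c3 = np.sum(s == 3)
    cd = np.sum((s == 2) & (q[0] == q[3]) & (q[1] == q[2]) & (q[0] != q[1]))
    if connectivity == 4:
        return (c1 - c3 + 2 * cd) // 4
    return (c1 - c3 - 2 * cd) // 4                          # 8-connectivity

print(euler_number(np.array([[1, 0], [0, 1]]), connectivity=4))   # 2 components -> 2
print(euler_number(np.array([[1, 0], [0, 1]]), connectivity=8))   # 1 component  -> 1
```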

  12. Do new wipe materials outperform traditional lead dust cleaning methods?

    PubMed

    Lewis, Roger D; Ong, Kee Hean; Emo, Brett; Kennedy, Jason; Brown, Christopher A; Condoor, Sridhar; Thummalakunta, Laxmi

    2012-01-01

    Government guidelines have traditionally recommended the use of wet mopping, sponging, or vacuuming for removal of lead-contaminated dust from hard surfaces in homes. The emergence of new technologies, such as the electrostatic dry cloth and wet disposable cloths used on mop heads, for removal of dust provides an opportunity to evaluate their ability to remove lead compared with more established methods. The purpose of this study was to determine if relative differences exist between two new and two older methods for removal of lead-contaminated dust (LCD) from three wood surfaces that were characterized by different roughness or texture. Standard leaded dust, <75 μm, was deposited by gravity onto the wood specimens. Specimens were cleaned using an automated device. Electrostatic dry cloths (dry Swiffer), wet Swiffer cloths, paper shop towels with non-ionic detergent, and vacuuming were used for cleaning LCD from the specimens. Lead analysis was by anodic stripping voltammetry. After the cleaning study was conducted, a study of the coefficient of friction was performed for each wipe material. Analysis of variance was used to evaluate the surface and cleaning methods. There were significant interactions between cleaning method and surface types, p = 0.007. Cleaning method was found to be a significant factor in removal of lead, p < 0.001, indicating that the effectiveness of each cleaning method is different. However, cleaning was not affected by surface type. The coefficient of friction, significantly different among the three wipes, is likely to influence the cleaning action. Cleaning method appears to be more important than texture in LCD removal from hard surfaces. There are some small but important factors in cleaning LCD from hard surfaces, including the limits of a Swiffer mop to conform to curved surfaces and the efficiency of the wetted shop towel and vacuuming for cleaning all surface textures. The mean percentage reduction in lead dust achieved by the

  13. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks. PMID:24808564

  14. Surface hopping outperforms secular Redfield theory when reorganization energies range from small to moderate (and nuclei are classical)

    SciTech Connect

    Landry, Brian R.; Subotnik, Joseph E.

    2015-03-14

    We evaluate the accuracy of Tully’s surface hopping algorithm for the spin-boson model in the limit of small to moderate reorganization energy. We calculate transition rates between diabatic surfaces in the exciton basis and compare against exact results from the hierarchical equations of motion; we also compare against approximate rates from the secular Redfield equation and Ehrenfest dynamics. We show that decoherence-corrected surface hopping performs very well in this regime, agreeing with secular Redfield theory for very weak system-bath coupling and outperforming secular Redfield theory for moderate system-bath coupling. Surface hopping can also be extended beyond the Markovian limits of standard Redfield theory. Given previous work [B. R. Landry and J. E. Subotnik, J. Chem. Phys. 137, 22A513 (2012)] that establishes the accuracy of decoherence-corrected surface-hopping in the Marcus regime, this work suggests that surface hopping may well have a very wide range of applicability.

  15. Pattern recognition control outperforms conventional myoelectric control in upper limb patients with targeted muscle reinnervation.

    PubMed

    Hargrove, Levi J; Lock, Blair A; Simon, Ann M

    2013-01-01

    Pattern recognition myoelectric control shows great promise as an alternative to conventional amplitude-based control for multiple-degree-of-freedom prosthetic limbs. Many studies have reported pattern recognition classification error performances of less than 10% during offline tests; however, it remains unclear how this translates to real-time control performance. In this contribution, we compare the real-time control performances between pattern recognition and direct myoelectric control (a popular form of conventional amplitude control) for participants who had received targeted muscle reinnervation. The real-time performance was evaluated during three tasks: (1) a box and blocks task, (2) a clothespin relocation task, and (3) a block stacking task. Our results found that pattern recognition significantly outperformed direct control for all three performance tasks. Furthermore, it was found that pattern recognition was configured much more quickly. The classification error of the pattern recognition systems used by the patients was found to be 16% (±1.6%), suggesting that systems with this error rate may still provide excellent control. Finally, patients qualitatively preferred using pattern recognition control and reported the resulting control to be smoother and more consistent. PMID:24110008

  16. A Mozart is not a Pavarotti: singers outperform instrumentalists on foreign accent imitation

    PubMed Central

    Christiner, Markus; Reiterer, Susanne Maria

    2015-01-01

    Recent findings have shown that people with higher musical aptitude were also better in oral language imitation tasks. However, whether singing capacity and instrument playing contribute differently to the imitation of speech has been ignored so far. Research has just recently started to understand that instrumentalists develop quite distinct skills when compared to vocalists. In the same vein, the role of the vocal motor system in language acquisition processes has been poorly investigated, as most investigations (neurobiological and behavioral) favor examining speech perception. We set out to test whether the vocal motor system can influence the ability to learn, produce and perceive new languages by contrasting instrumentalists and vocalists. We therefore investigated 96 participants: 27 instrumentalists, 33 vocalists and 36 non-musicians/non-singers. They were tested for their ability to imitate foreign speech, an unknown language (Hindi) and a second language (English), and for their musical aptitude. Results revealed that both instrumentalists and vocalists have a higher ability to imitate unintelligible speech and foreign accents than non-musicians/non-singers. Within the musician group, vocalists outperformed instrumentalists significantly. Conclusion: First, adaptive plasticity for speech imitation is not reliant on audition alone but also on vocal-motor induced processes. Second, vocal flexibility of singers goes together with higher speech imitation aptitude. Third, vocal motor training, as of singers, may speed up foreign language acquisition processes. PMID:26379537

  17. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
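
    The regularized-least-squares core (an l1-penalized least-squares problem) can be sketched with plain iterative soft thresholding (ISTA); the linear model, penalty weight, and toy data below are placeholders, not the authors' acoustic model or parameters.

```python
# Sketch: minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding,
# the generic solver behind l1-regularized least-squares reconstruction.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.5, n_iter=1000):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy example: a few point-like reflectors observed through a random linear model.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 400))
x_true = np.zeros(400)
x_true[[50, 180, 330]] = [1.0, -0.7, 0.4]
y = A @ x_true + 0.01 * rng.normal(size=200)
x_hat = ista(A, y)
print(np.argsort(np.abs(x_hat))[-3:])          # largest coefficients: typically 50, 180, 330
```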

  18. A Sparse Reconstruction Algorithm for Ultrasonic Images in Nondestructive Testing

    PubMed Central

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Junior, Flávio Neves; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700

  19. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
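
The ensemble rules named above operate on the action preferences of the individual RL algorithms. Below is a minimal sketch of two of them, majority voting and Boltzmann multiplication, assuming each base learner exposes a row of action values for the current state; the Q-values, temperature, and tie-breaking are illustrative assumptions, not the authors' code.

```python
import numpy as np

def majority_vote(action_values):
    """Each learner votes for its greedy action; the most-voted action wins."""
    votes = np.zeros(action_values.shape[1])
    for q in action_values:                 # one row of Q-values per learner
        votes[np.argmax(q)] += 1
    return int(np.argmax(votes))

def boltzmann_multiplication(action_values, temperature=1.0, rng=None):
    """Multiply the learners' Boltzmann action probabilities and sample from the product."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.ones(action_values.shape[1])
    for q in action_values:
        p = np.exp(q / temperature)
        probs *= p / p.sum()
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy usage: three learners, four actions.
q_table = np.array([[0.1, 0.9, 0.2, 0.0],
                    [0.3, 0.8, 0.1, 0.2],
                    [0.7, 0.2, 0.6, 0.1]])
print(majority_vote(q_table), boltzmann_multiplication(q_table))
```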

  20. Why Do Chinese-Australian Students Outperform Their Australian Peers in Mathematics: A Comparative Case Study

    ERIC Educational Resources Information Center

    Zhao, Dacheng; Singh, Michael

    2011-01-01

    International comparative studies and cross-cultural studies of mathematics achievement indicate that Chinese students (whether living in or outside China) consistently outperform their Western counterparts. This study shows that the gap between Chinese-Australian and other Australian students is best explained by differences in motivation to…

  1. New dispenser types for integrated pest management of agriculturally significant insect pests: an algorithm with specialized searching capacity in electronic data bases.

    PubMed

    Hummel, H E; Eisinger, M T; Hein, D F; Breuer, M; Schmid, S; Leithold, G

    2012-01-01

    Pheromone effects, discovered some 130 years ago but scientifically defined just half a century ago, are a great bonus for basic and applied biology. Specifically, pest management efforts have been advanced in many insect orders, whether for monitoring, mass trapping, or mating disruption. By devising and applying a new search algorithm, nearly 20,000 entries in the pheromone literature have been counted, a number much higher than originally anticipated. This compilation contains identified, and thus synthesizable, structures for all major orders of insects. Among them are hundreds of agriculturally significant insect pests whose aggregated damage and costly control measures run into multiple billions of dollars annually. Unfortunately, and despite considerable effort within the international entomological community, the number of efficient and cheap engineering solutions for dispensing pheromones under variable field conditions lags uncomfortably behind. Some innovative approaches are cited from the relevant literature in an attempt to rectify this situation. Recently, specifically designed electrospun organic nanofibers have shown considerable promise. With their use, the mating communication of vineyard insects such as Lobesia botrana (Lep.: Tortricidae) can be disrupted for periods of seven weeks. PMID:23885431

  2. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, the firefly algorithm (FA), mimics the social behavior of fireflies based on their flashing and attraction characteristics. In the present study, we introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
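
A minimal sketch of the idea in this abstract: a chaotic (logistic) map replaces a fixed attractiveness coefficient in an otherwise standard firefly loop. The objective function, coefficients, and population settings below are illustrative placeholders, not those used in the cited study.

```python
import numpy as np

def sphere(x):
    """Toy objective: sum of squares (global minimum at the origin)."""
    return float(np.sum(x ** 2))

def chaotic_firefly(obj, dim=2, n_fireflies=15, n_iter=100, seed=0):
    """Firefly algorithm in which the attractiveness beta follows a logistic chaotic map."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_fireflies, dim))
    light = np.array([obj(p) for p in pos])       # lower value = brighter (minimization)
    gamma, alpha = 1.0, 0.2                       # absorption and randomization coefficients
    beta = 0.7                                    # chaotic state in (0, 1)
    for _ in range(n_iter):
        beta = 4.0 * beta * (1.0 - beta)          # logistic map drives the attractiveness
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:           # move firefly i toward the brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    attract = beta * np.exp(-gamma * r2)
                    pos[i] += attract * (pos[j] - pos[i]) + alpha * rng.normal(size=dim)
                    light[i] = obj(pos[i])
    best = int(np.argmin(light))
    return pos[best], light[best]

print(chaotic_firefly(sphere))
```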

  3. Physiological outperformance at the morphologically-transformed edge of the cyanobacteriosponge Terpios hoshinota (Suberitidae: Hadromerida) when confronting opponent corals.

    PubMed

    Wang, Jih-Terng; Hsu, Chia-Min; Kuo, Chao-Yang; Meng, Pei-Jie; Kao, Shuh-Ji; Chen, Chaolun Allen

    2015-01-01

    Terpios hoshinota, an encrusting cyanosponge, is known as a strong substrate competitor of reef-building corals that kills encountered coral by overgrowth. Terpios outbreaks cause significant declines in living coral cover in Indo-Pacific coral reefs, with the damage usually lasting for decades. Recent studies show that there are morphological transformations at a sponge's growth front when confronting corals. Whether these morphological transformations at coral contacts are associated with physiological outperformance (e.g., higher metabolic activity or nutritional status) over other portions of Terpios remains equivocal. In this study, we compared indicators of photosynthetic capability and nitrogen status of the sponge-cyanobacteria association at portions of the sponge proximal, middle, and distal to opponent corals. Terpios tissues in contact with corals displayed significant increases in photosynthetic oxygen production (ca. 61%), the δ13C value (ca. 4%), free proteinogenic amino acid content (ca. 85%), and Gln/Glu ratio (ca. 115%) compared to middle and distal parts of the sponge. In contrast, the maximum quantum yield (Fv/Fm), which is the indicator usually used to represent the integrity of photosystem II, of cyanobacteria photosynthesis was low (0.256~0.319) and showed an inverse trend of higher values in the distal portion of the sponge that might be due to high and variable levels of cyanobacterial phycocyanin. The inconsistent results between photosynthetic oxygen production and Fv/Fm values indicated that maximum quantum yields might not be a suitable indicator to represent the photosynthetic function of the Terpios-cyanobacteria association. Our data conclusively suggest that Terpios hoshinota competes with opponent corals not only by the morphological transformation of the sponge-cyanobacteria association but also by physiological outperformance in accumulating resources for the battle. PMID:26110525

  4. Physiological Outperformance at the Morphologically-Transformed Edge of the Cyanobacteriosponge Terpios hoshinota (Suberitidae: Hadromerida) when Confronting Opponent Corals

    PubMed Central

    Wang, Jih-Terng; Hsu, Chia-Min; Kuo, Chao-Yang; Meng, Pei-Jie; Kao, Shuh-Ji; Chen, Chaolun Allen

    2015-01-01

    Terpios hoshinota, an encrusting cyanosponge, is known as a strong substrate competitor of reef-building corals that kills encountered coral by overgrowth. Terpios outbreaks cause significant declines in living coral cover in Indo-Pacific coral reefs, with the damage usually lasting for decades. Recent studies show that there are morphological transformations at a sponge’s growth front when confronting corals. Whether these morphological transformations at coral contacts are associated with physiological outperformance (e.g., higher metabolic activity or nutritional status) over other portions of Terpios remains equivocal. In this study, we compared indicators of photosynthetic capability and nitrogen status of the sponge-cyanobacteria association at portions of the sponge proximal, middle, and distal to opponent corals. Terpios tissues in contact with corals displayed significant increases in photosynthetic oxygen production (ca. 61%), the δ13C value (ca. 4%), free proteinogenic amino acid content (ca. 85%), and Gln/Glu ratio (ca. 115%) compared to middle and distal parts of the sponge. In contrast, the maximum quantum yield (Fv/Fm), which is the indicator usually used to represent the integrity of photosystem II, of cyanobacteria photosynthesis was low (0.256~0.319) and showed an inverse trend of higher values in the distal portion of the sponge that might be due to high and variable levels of cyanobacterial phycocyanin. The inconsistent results between photosynthetic oxygen production and Fv/Fm values indicated that maximum quantum yields might not be a suitable indicator to represent the photosynthetic function of the Terpios-cyanobacteria association. Our data conclusively suggest that Terpios hoshinota competes with opponent corals not only by the morphological transformation of the sponge-cyanobacteria association but also by physiological outperformance in accumulating resources for the battle. PMID:26110525

  5. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB. PMID:27410549

  6. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.

    PubMed

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835

  7. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835

  8. Study of genetic direct search algorithms for function optimization

    NASA Technical Reports Server (NTRS)

    Zeigler, B. P.

    1974-01-01

    The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from a lack of gradient exploitation facilities when gradient information can be used to guide the search. (2) For large populations or low-dimensional function spaces, mutation is a sufficient operator; however, for small populations or high-dimensional functions, crossover applied at about equal frequency with mutation is the optimal combination. (3) Complexity, in terms of storage space and running time, increases significantly when the population size is increased or when the inversion operator or the second-level adaptation routine is added to the basic structure.

  9. Trait responses of invasive aquatic macrophyte congeners: colonizing diploid outperforms polyploid

    PubMed Central

    Grewell, Brenda J.; Skaer Thomason, Meghan J.; Futrell, Caryn J.; Iannucci, Maria; Drenovsky, Rebecca E.

    2016-01-01

    Understanding traits underlying colonization and niche breadth of invasive plants is key to developing sustainable management solutions to curtail invasions at the establishment phase, when efforts are often most effective. The aim of this study was to evaluate how two invasive congeners differing in ploidy respond to high and low resource availability following establishment from asexual fragments. Because polyploids are expected to have wider niche breadths than diploid ancestors, we predicted that a decaploid species would have superior ability to maximize resource uptake and use, and outperform a diploid congener when colonizing environments with contrasting light and nutrient availability. A mesocosm experiment was designed to test the main and interactive effects of ploidy (diploid and decaploid) and soil nutrient availability (low and high) nested within light environments (shade and sun) of two invasive aquatic plant congeners. Counter to our predictions, the diploid congener outperformed the decaploid in the early stage of growth. Although growth was similar and low in the cytotypes at low nutrient availability, the diploid species had a much higher growth rate and biomass accumulation than the polyploid with nutrient enrichment, irrespective of light environment. Our results also revealed extreme differences in time to anthesis between the cytotypes. The rapid growth and earlier flowering of the diploid congener relative to the decaploid congener represent alternate strategies for establishment and success. PMID:26921139

  10. Being outperformed in an intergroup context: the relationship between group status and self-protective strategies.

    PubMed

    Redersdorff, Sandrine; Martinot, Delphine

    2009-06-01

    The present study examines the effects of group status on self-esteem when individuals are outperformed by an in-group target (Experiments 1 and 2) or an out-group (Experiment 2). The main aim was to examine different self-protective mechanisms when the current standing of the in-group vis-à-vis another group is either unfavourable (low status) or favourable (high status). Experiment 1 showed that when outperformed by an in-group target, the members of a low status group reported higher self-esteem than members of a high status group. Moreover, this effect was mediated by group identification. Experiment 2 replicated the previous results and gave rise to similar effects on investment in the group. The perceived relevance of the comparison group appeared to protect the self-esteem of high status group members. This research demonstrates the mediating role of self-protection mechanisms such as group identification and the perceived relevance of a comparison group. PMID:18922208

  11. Trait responses of invasive aquatic macrophyte congeners: colonizing diploid outperforms polyploid.

    PubMed

    Grewell, Brenda J; Skaer Thomason, Meghan J; Futrell, Caryn J; Iannucci, Maria; Drenovsky, Rebecca E

    2016-01-01

    Understanding traits underlying colonization and niche breadth of invasive plants is key to developing sustainable management solutions to curtail invasions at the establishment phase, when efforts are often most effective. The aim of this study was to evaluate how two invasive congeners differing in ploidy respond to high and low resource availability following establishment from asexual fragments. Because polyploids are expected to have wider niche breadths than diploid ancestors, we predicted that a decaploid species would have superior ability to maximize resource uptake and use, and outperform a diploid congener when colonizing environments with contrasting light and nutrient availability. A mesocosm experiment was designed to test the main and interactive effects of ploidy (diploid and decaploid) and soil nutrient availability (low and high) nested within light environments (shade and sun) of two invasive aquatic plant congeners. Counter to our predictions, the diploid congener outperformed the decaploid in the early stage of growth. Although growth was similar and low in the cytotypes at low nutrient availability, the diploid species had a much higher growth rate and biomass accumulation than the polyploid with nutrient enrichment, irrespective of light environment. Our results also revealed extreme differences in time to anthesis between the cytotypes. The rapid growth and earlier flowering of the diploid congener relative to the decaploid congener represent alternate strategies for establishment and success. PMID:26921139

  12. Kidney Injury Molecule-1 Outperforms Traditional Biomarkers of Kidney Injury in Multi-site Preclinical Biomarker Qualification Studies

    PubMed Central

    Vaidya, Vishal S.; Ozer, Josef S.; Dieterle, Frank; Collings, Fitz B.; Ramirez, Victoria; Troth, Sean; Muniappa, Nagaraja; Thudium, Douglas; Gerhold, David; Holder, Daniel J.; Bobadilla, Norma A.; Marrer, Estelle; Perentes, Elias; Cordier, André; Vonderscher, Jacky; Maurer, Gérard; Goering, Peter L.; Sistare, Frank D.; Bonventre, Joseph V.

    2010-01-01

    Kidney toxicity accounts for a significant percentage of morbidity and drug candidate failure. Serum creatinine (SCr) and blood urea nitrogen (BUN) have been used to monitor kidney dysfunction for over a century but these markers are insensitive and non-specific. In multi-site preclinical rat toxicology studies the diagnostic performance of urinary kidney injury molecule-1 (Kim-1) was compared to traditional biomarkers as predictors of kidney tubular histopathologic changes, currently considered the “gold standard” of nephrotoxicity. In multiple models of kidney injury, urinary Kim-1 significantly outperformed SCr and BUN. The area under the receiver operating characteristic curve for Kim-1 was between 0.91 and 0.99 as compared to 0.79 to 0.9 for BUN and 0.73 to 0.85 for SCr. Thus urinary Kim-1 is the first injury biomarker of kidney toxicity qualified by the FDA and EMEA and is expected to significantly improve kidney safety monitoring. PMID:20458318

  13. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGESBeta

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  14. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  15. Low-Friction Minilaparoscopy Outperforms Regular 5-mm and 3-mm Instruments for Precise Tasks

    PubMed Central

    Firme, Wood A.; Lima, Diego L.; de Paula Lopes, Vladmir Goldstein; Montandon, Isabelle D.; Filho, Flavio Santos; Shadduck, Phillip P.

    2015-01-01

    Background and Objectives: Therapeutic laparoscopy was incorporated into surgical practice more than 25 y ago. Several modifications have since been developed to further minimize surgical trauma and improve results. Minilaparoscopy, performed with 2- to 3-mm instruments, was introduced in the mid-1990s but failed to attain mainstream use, mostly because of the limitations of the early devices. Buoyed by a renewed interest, new generations of mini instruments are being developed with improved functionality and durability. This study is an objective evaluation of a new set of mini instruments with a novel low-friction design. Method: Twenty-two medical students and 22 surgical residents served as study participants. Three designs of laparoscopic instruments were evaluated: conventional 5 mm, traditional 3 mm, and low-friction 3 mm. The instruments were evaluated with a standard surgical simulator, emulating 4 exercises of various complexities, testing grasping, precise 2-handed movements, and suturing. The metric measured was time to task completion, with 5 replicates for every combination of instrument–exercise–participant. Results: For all 4 tasks, the instrument design that performed the best was the same in both the medical student and surgical resident groups. For the gross-grasping task, the 5-mm conventional instruments performed best, followed by the low-friction mini instruments. For the 3 more complex and precise tasks, the low-friction mini instruments outperformed both of the other instrument designs. Conclusion: In standard surgical simulator exercises, low-friction minilaparoscopic instruments outperformed both conventional 3- and 5-mm laparoscopic instruments for precise tasks. PMID:26390530

  16. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  17. Bayesian Markov models consistently outperform PWMs at predicting motifs in nucleotide sequences.

    PubMed

    Siebert, Matthias; Söding, Johannes

    2016-07-27

    Position weight matrices (PWMs) are the standard model for DNA and RNA regulatory motifs. In PWMs, nucleotide probabilities are independent of nucleotides at other positions. Models that account for dependencies need many parameters and are prone to overfitting. We have developed a Bayesian approach for motif discovery using Markov models in which conditional probabilities of order k - 1 act as priors for those of order k. This Bayesian Markov model (BaMM) training automatically adapts model complexity to the amount of available data. We also derive an EM algorithm for de-novo discovery of enriched motifs. For transcription factor binding, BaMMs achieve significantly (P = 1/16) higher cross-validated partial AUC than PWMs in 97% of 446 ChIP-seq ENCODE datasets and improve performance by 36% on average. BaMMs also learn complex multipartite motifs, improving predictions of transcription start sites, polyadenylation sites, bacterial pause sites, and RNA binding sites by 26-101%. BaMMs never performed worse than PWMs. These robust improvements argue in favour of generally replacing PWMs by BaMMs. PMID:27288444
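
The core modeling idea above, lower-order conditional probabilities acting as priors for higher-order ones, can be written as a simple pseudo-count smoothing scheme. The sketch below is an illustrative approximation of that idea, not the published BaMM code; the pseudo-count strength alpha and the uniform fallback for unseen contexts are assumptions.

```python
from collections import defaultdict

ALPHABET = "ACGT"

def conditional_probs(seqs, k, alpha=20.0):
    """Order-k conditional probabilities smoothed by the order-(k-1) model.

    p_k(a | ctx) = (count(ctx, a) + alpha * p_{k-1}(a | ctx[1:])) / (count(ctx) + alpha)
    """
    if k == 0:
        total = sum(len(s) for s in seqs)
        return {"": {a: sum(s.count(a) for s in seqs) / total for a in ALPHABET}}
    lower = conditional_probs(seqs, k - 1, alpha)           # order-(k-1) model as prior
    uniform = {a: 1.0 / len(ALPHABET) for a in ALPHABET}     # fallback for unseen contexts
    counts = defaultdict(lambda: defaultdict(float))
    for s in seqs:
        for i in range(k, len(s)):
            counts[s[i - k:i]][s[i]] += 1.0
    probs = {}
    for ctx, c in counts.items():
        n = sum(c.values())
        prior = lower.get(ctx[1:], uniform)
        probs[ctx] = {a: (c[a] + alpha * prior[a]) / (n + alpha) for a in ALPHABET}
    return probs

# Toy usage: second-order model trained on a few short sequences.
model = conditional_probs(["ACGTACGT", "ACGGACGT", "TTACGTAC"], k=2)
```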

  18. A graph spectrum based geometric biclustering algorithm.

    PubMed

    Wang, Doris Z; Yan, Hong

    2013-01-21

    Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285

  19. Pathway-Dependent Effectiveness of Network Algorithms for Gene Prioritization

    PubMed Central

    Shim, Jung Eun; Hwang, Sohyun; Lee, Insuk

    2015-01-01

    A network-based approach has proven useful for the identification of novel genes associated with complex phenotypes, including human diseases. Because network-based gene prioritization algorithms are based on propagating information of known phenotype-associated genes through networks, the pathway structure of each phenotype might significantly affect the effectiveness of algorithms. We systematically compared two popular network algorithms with distinct mechanisms – direct neighborhood which propagates information to only direct network neighbors, and network diffusion which diffuses information throughout the entire network – in prioritization of genes for worm and human phenotypes. Previous studies reported that network diffusion generally outperforms direct neighborhood for human diseases. Although prioritization power is generally measured for all ranked genes, only the top candidates are significant for subsequent functional analysis. We found that high prioritizing power of a network algorithm for all genes cannot guarantee successful prioritization of top ranked candidates for a given phenotype. Indeed, the majority of the phenotypes that were more efficiently prioritized by network diffusion showed higher prioritizing power for top candidates by direct neighborhood. We also found that connectivity among pathway genes for each phenotype largely determines which network algorithm is more effective, suggesting that the network algorithm used for each phenotype should be chosen with consideration of pathway gene connectivity. PMID:26091506
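
Both algorithms compared in this abstract can be expressed as small matrix operations on a gene network with a seed vector of known phenotype genes: direct neighborhood scores genes by their seed neighbors, while network diffusion propagates seed information by a random walk with restart. The toy adjacency matrix, restart probability, and normalization below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def direct_neighborhood(adj, seeds):
    """Score each gene by its (degree-normalized) number of edges to seed genes."""
    deg = adj.sum(axis=1)
    return adj @ seeds / np.maximum(deg, 1.0)

def network_diffusion(adj, seeds, restart=0.5, n_iter=100):
    """Random walk with restart: diffuse seed information over the whole network."""
    W = adj / np.maximum(adj.sum(axis=0, keepdims=True), 1.0)   # column-normalized transitions
    p0 = seeds / seeds.sum()
    p = p0.copy()
    for _ in range(n_iter):
        p = (1 - restart) * (W @ p) + restart * p0
    return p

# Toy network of 5 genes; genes 0 and 1 are known phenotype-associated seeds.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
seeds = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
print(direct_neighborhood(adj, seeds))
print(network_diffusion(adj, seeds))
```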

  20. Site-specific in situ growth of an interferon-polymer conjugate that outperforms PEGASYS in cancer therapy.

    PubMed

    Hu, Jin; Wang, Guilin; Zhao, Wenguo; Liu, Xinyu; Zhang, Libin; Gao, Weiping

    2016-07-01

    Conjugating poly(ethylene glycol) (PEG) to therapeutic proteins, known as PEGylation, is widely used as a means to improve their pharmacokinetics and therapeutic potential. One prime example is PEGylated interferon-alpha (PEGASYS). However, PEGylation usually leads to a heterogeneous mixture of positional isomers with reduced bioactivity and low yield. Herein, we report site-specific in situ growth (SIG) of a PEG-like polymer, poly(oligo(ethylene glycol) methyl ether methacrylate) (POEGMA), from the C-terminus of interferon-alpha to form a site-specific (C-terminal) and stoichiometric (1:1) POEGMA conjugate of interferon-alpha in high yield. The POEGMA conjugate showed significantly improved pharmacokinetics, tumor accumulation and anticancer efficacy as compared to interferon-alpha. Notably, the POEGMA conjugate possessed a 7.2-fold higher in vitro antiproliferative bioactivity than PEGASYS. More importantly, in a murine cancer model, the POEGMA conjugate completely inhibited tumor growth and eradicated tumors in 75% of mice without appreciable systemic toxicity, whereas at the same dose, no mice treated with PEGASYS survived for over 58 days. The outperformance of a site-specific POEGMA conjugate prepared by SIG over PEGASYS, the current gold standard for interferon-alpha delivery, suggests that SIG is of interest for the development of next-generation protein therapeutics. PMID:27152679

  1. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
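
The report's first fusion technique is based on the two-dimensional discrete wavelet transform. The numpy-only sketch below illustrates the general idea with a single-level Haar transform, averaging the approximation bands and keeping the stronger detail coefficients; this particular fusion rule and the single decomposition level are simplifying assumptions, not the reported implementation.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform; image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def haar_idwt2(ll, hl, lh, hh):
    """Exact inverse of haar_dwt2."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse(img1, img2):
    """Average the low-frequency bands, keep the larger-magnitude high-frequency details."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(b1) >= np.abs(b2), b1, b2) for b1, b2 in zip(c1[1:], c2[1:])]
    return haar_idwt2(ll, *details)

# Toy usage with two random 8x8 "images".
rng = np.random.default_rng(1)
fused = fuse(rng.random((8, 8)), rng.random((8, 8)))
```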

  2. A paclitaxel-loaded recombinant polypeptide nanoparticle outperforms Abraxane in multiple murine cancer models

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Jayanta; Bellucci, Joseph J.; Weitzhandler, Isaac; McDaniel, Jonathan R.; Spasojevic, Ivan; Li, Xinghai; Lin, Chao-Chieh; Chi, Jen-Tsan Ashley; Chilkoti, Ashutosh

    2015-08-01

    Packaging clinically relevant hydrophobic drugs into a self-assembled nanoparticle can improve their aqueous solubility, plasma half-life, tumour-specific uptake and therapeutic potential. To this end, here we conjugated paclitaxel (PTX) to recombinant chimeric polypeptides (CPs) that spontaneously self-assemble into ~60 nm near-monodisperse nanoparticles that increased the systemic exposure of PTX by sevenfold compared with free drug and twofold compared with the Food and Drug Administration-approved taxane nanoformulation (Abraxane). The tumour uptake of the CP-PTX nanoparticle was fivefold greater than free drug and twofold greater than Abraxane. In a murine cancer model of human triple-negative breast cancer and prostate cancer, CP-PTX induced near-complete tumour regression after a single dose in both tumour models, whereas at the same dose, no mice treated with Abraxane survived for >80 days (breast) and 60 days (prostate), respectively. These results show that a molecularly engineered nanoparticle with precisely engineered design features outperforms Abraxane, the current gold standard for PTX delivery.

  3. Collective Intelligence Meets Medical Decision-Making: The Collective Outperforms the Best Radiologist

    PubMed Central

    Wolf, Max; Krause, Jens; Carney, Patricia A.; Bogart, Andy; Kurvers, Ralf H. J. M.

    2015-01-01

    While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence. PMID:26267331
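
The three aggregation rules studied above reduce several readers' independent recall/no-recall decisions to a single decision. A minimal sketch is shown below; the quorum threshold and the accuracy-style weights are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def majority(decisions):
    """Recall if more than half of the radiologists would recall."""
    return decisions.mean() > 0.5

def quorum(decisions, threshold=0.25):
    """Recall if at least a fraction `threshold` of the radiologists would recall."""
    return decisions.mean() >= threshold

def weighted_quorum(decisions, weights, threshold=0.25):
    """Quorum rule where each reader counts according to a weight (e.g. past accuracy)."""
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(decisions, weights)) / weights.sum() >= threshold

# Toy usage: five readers, 1 = recall the patient, 0 = do not recall.
reads = np.array([1, 0, 0, 1, 0])
print(majority(reads), quorum(reads), weighted_quorum(reads, [0.9, 0.6, 0.7, 0.8, 0.5]))
```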

  4. Delignification outperforms alkaline extraction for xylan fingerprinting of oil palm empty fruit bunch.

    PubMed

    Murciano Martínez, Patricia; Kabel, Mirjam A; Gruppen, Harry

    2016-11-20

    Enzyme-hydrolysed (hemi-)celluloses from oil palm empty fruit bunches (EFBs) are a source for the production of bio-fuels or chemicals. In this study, after either peracetic acid delignification or alkaline extraction, EFB hemicellulose structures were described, aided by xylanase hydrolysis. Delignification of EFB facilitated the hydrolysis of EFB xylan by a pure endo-β-1,4-xylanase. Up to 91% (w/w) of the non-extracted xylan in the delignified EFB was hydrolysed, compared to less than 4% (w/w) of that in untreated EFB. Alkaline extraction of EFB, without prior delignification, yielded only 50% of the xylan, and only 40% of the xylan obtained was hydrolysed by the endo-xylanase used. Hence, delignification alone outperformed alkaline extraction as a pretreatment for enzymatic fingerprinting of EFB xylans. From the analysis of the oligosaccharide fingerprint of the delignified, endo-xylanase-hydrolysed EFB xylan, the structure was proposed as acetylated 4-O-methylglucuronoarabinoxylan. PMID:27561506

  5. A Paclitaxel-Loaded Recombinant Polypeptide Nanoparticle Outperforms Abraxane in Multiple Murine Cancer Models

    PubMed Central

    Bhattacharyya, Jayanta; Bellucci, Joseph J.; Weitzhandler, Isaac; McDaniel, Jonathan R.; Spasojevic, Ivan; Li, Xinghai; Lin, Chao-Chieh; Chi, Jen-Tsan Ashley; Chilkoti, Ashutosh

    2015-01-01

    Packaging clinically relevant hydrophobic drugs into a self-assembled nanoparticle can improve their aqueous solubility, plasma half-life, tumor specific uptake and therapeutic potential. To this end, here we conjugated paclitaxel (PTX) to recombinant chimeric polypeptides (CPs) that spontaneously self-assemble into ~60-nm diameter near-monodisperse nanoparticles that increased the systemic exposure of PTX by 7-fold compared to free drug and 2-fold compared to the FDA approved taxane nanoformulation (Abraxane®). The tumor uptake of the CP-PTX nanoparticle was 5-fold greater than free drug and 2-fold greater than Abraxane. In a murine cancer model of human triple negative breast cancer and prostate cancer, CP-PTX induced near complete tumor regression after a single dose in both tumor models, whereas at the same dose, no mice treated with Abraxane survived for more than 80 days (breast) and 60 days (prostate) respectively. These results show that a molecularly engineered nanoparticle with precisely engineered design features outperforms Abraxane, the current gold standard for paclitaxel delivery. PMID:26239362

  6. Gender differences in primary and secondary education: Are girls really outperforming boys?

    NASA Astrophysics Data System (ADS)

    Driessen, Geert; van Langen, Annemarie

    2013-06-01

    A moral panic has broken out in several countries after recent studies showed that girls were outperforming boys in education. Commissioned by the Dutch Ministry of Education, the present study examines the position of boys and girls in Dutch primary education and in the first phase of secondary education over the past ten to fifteen years. On the basis of several national and international large-scale databases, the authors examined whether one can indeed speak of a gender gap, at the expense of boys. Three domains were investigated, namely cognitive competencies, non-cognitive competencies, and school career features. The results as expressed in effect sizes show that there are hardly any differences with regard to language and mathematics proficiency. However, the position of boys in terms of educational level and attitudes and behaviour is much more unfavourable than that of girls. Girls, on the other hand, score more unfavourably with regard to sector and subject choice. While the present situation in general does not differ very much from that of a decade ago, it is difficult to predict in what way the balances might shift in the years to come.

  7. Plants adapted to warmer climate do not outperform regional plants during a natural heat wave.

    PubMed

    Bucharova, Anna; Durka, Walter; Hermann, Julia-Maria; Hölzel, Norbert; Michalski, Stefan; Kollmann, Johannes; Bossdorf, Oliver

    2016-06-01

    With ongoing climate change, many plant species may not be able to adapt rapidly enough, and some conservation experts are therefore considering translocating warm-adapted ecotypes to mitigate the effects of climate warming. Although this strategy, called assisted migration, is intuitively plausible, most of the support comes from models, whereas experimental evidence is so far scarce. Here we present data on multiple ecotypes of six grassland species, which we grew in four common gardens in Germany during a natural heat wave, with temperatures 1.4-2.0°C higher than the long-term means. In each garden we compared the performance of regional ecotypes with plants from a locality with long-term summer temperatures similar to what the plants experienced during the summer heat wave. We found no difference in performance between regional and warm-adapted plants in four of the six species. In two species, regional ecotypes even outperformed warm-adapted plants, despite the elevated temperatures, which suggests that translocating warm-adapted ecotypes may not only lack the desired effect of increased performance but may even have negative consequences. Even if adaptation to climate plays a role, other factors involved in local adaptation, such as biotic interactions, may override it. Based on our results, we cannot advocate assisted migration as a universal tool to enhance the performance of local plant populations and communities during climate change. PMID:27516871

  8. Layered carbon nanotube-polyelectrolyte electrodes outperform traditional neural interface materials.

    PubMed

    Jan, Edward; Hendricks, Jeffrey L; Husaini, Vincent; Richardson-Burns, Sarah M; Sereno, Andrew; Martin, David C; Kotov, Nicholas A

    2009-12-01

    The safety, function, and longevity of implantable neuroprosthetic and cardiostimulating electrodes depend heavily on the electrical properties of the electrode-tissue interface, which in many cases requires substantial improvement. While different variations of carbon nanotube materials have been shown to be suitable for neural excitation, it is critical to evaluate them against other materials used for bioelectrical interfacing, which has not been done in any study so far despite strong interest in this area. In this study, we carried out this evaluation and found that composite multiwalled carbon nanotube-polyelectrolyte (MWNT-PE) multilayer electrodes substantially outperform, in one way or another, the state-of-the-art neural interface materials available today, namely activated electrochemically deposited iridium oxide (IrOx) and poly(3,4-ethylenedioxythiophene) (PEDOT). Our findings provide concrete experimental proof of the much-discussed possibility that carbon nanotube composites can serve as an excellent new material for neural interfacing, with a strong possibility of leading to a new generation of implantable electrodes. PMID:19785391

  9. Collective intelligence meets medical decision-making: the collective outperforms the best radiologist.

    PubMed

    Wolf, Max; Krause, Jens; Carney, Patricia A; Bogart, Andy; Kurvers, Ralf H J M

    2015-01-01

    While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules ("majority", "quorum", and "weighted quorum") when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence. PMID:26267331

  10. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for assessing image restoration accuracy, and to compare the subjective results with predictions by several objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appeared closer to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of these methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.

  11. Lianas always outperform tree seedlings regardless of soil nutrients: results from a long-term fertilization experiment.

    PubMed

    Pasquini, Sarah C; Wright, S Joseph; Santiago, Louis S

    2015-07-01

    Lianas are a prominent growth form in tropical forests, and there is compelling evidence that they are increasing in abundance throughout the Neotropics. While recent evidence shows that soil resources limit tree growth even in deep shade, the degree to which soil resources limit lianas in forest understories, where they coexist with trees for decades, remains unknown. Regardless, the physiological underpinnings of soil resource limitation in deeply shaded tropical habitats remain largely unexplored for either trees or lianas. Theory predicts that lianas should be more limited by soil resources than trees because they occupy the quick-return end of the "leaf economic spectrum," characterized by high rates of photosynthesis, high specific leaf area, short leaf life span, affinity to high-nutrient sites, and greater foliar nutrient concentrations. To address these issues, we asked whether soil resources (nitrogen, phosphorus, and potassium), alone or in combination, applied experimentally for more than a decade would cause significant changes in the morphology or physiology of tree and liana seedlings in a lowland tropical forest. We found evidence for the first time that phosphorus limits the photosynthetic performance of both trees and lianas in deeply shaded understory habitats. More importantly, lianas always showed significantly greater photosynthetic capacity, quenching, and saturating light levels compared to trees across all treatments. We found little evidence for nutrient x growth form interactions, indicating that lianas were not disproportionately favored in nutrient-rich habitats. Tree and liana seedlings differed markedly for six key morphological traits, demonstrating that architectural differences occurred very early in ontogeny prior to lianas finding a trellis (all seedlings were self-supporting). Overall, our results do not support nutrient loading as a mechanism of increasing liana abundance in the Neotropics. Rather, our finding that lianas

  12. Genetic Algorithms: A New Method for Neutron Beam Spectral Characterization

    SciTech Connect

    David W. Freeman

    2000-06-04

    A revolutionary new concept for solving the neutron spectrum unfolding problem using genetic algorithms (GAs) has recently been introduced. GAs are part of a new field of evolutionary solution techniques that mimic living systems with computer-simulated chromosome solutions that mate, mutate, and evolve to create improved solutions. The original motivation for the research was to improve spectral characterization of neutron beams associated with boron neutron capture therapy (BNCT). The GA unfolding technique has been successfully applied to problems with moderate energy resolution (up to 47 energy groups). Initial research indicates that the GA unfolding technique may well be superior to popular unfolding methods in common use. Research now under way at Kansas State University is focused on optimizing the unfolding algorithm and expanding its energy resolution to unfold detailed beam spectra based on multiple foil measurements. Indications are that the final code will significantly outperform current, state-of-the-art codes in use by the scientific community.
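
Spectrum unfolding seeks group fluxes phi such that the detector/foil response matrix R maps them onto the measured activities m. The sketch below is a generic genetic-algorithm treatment of that inverse problem for illustration only; the response matrix, selection scheme, and mutation scale are placeholders, not the code described in the abstract.

```python
import numpy as np

def ga_unfold(R, m, n_groups, pop_size=60, n_gen=300, seed=0):
    """Evolve non-negative group fluxes phi so that R @ phi matches the measurements m."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_groups))

    def fitness(p):
        return -np.linalg.norm(R @ p - m)            # higher (less negative) is better

    for _ in range(n_gen):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.integers(0, len(parents), size=2)
            cut = rng.integers(1, n_groups)          # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            child += 0.05 * rng.standard_normal(n_groups)   # Gaussian mutation
            children.append(np.clip(child, 0.0, None))      # fluxes stay non-negative
        pop = np.vstack([parents, children])
    scores = np.array([fitness(p) for p in pop])
    return pop[int(np.argmax(scores))]

# Toy usage: 4 foils, 6 energy groups.
rng = np.random.default_rng(1)
R = rng.random((4, 6))
phi_true = rng.random(6)
phi_hat = ga_unfold(R, R @ phi_true, n_groups=6)
```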

  13. A hierarchical algorithm for molecular similarity (H-FORMS).

    PubMed

    Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel

    2015-07-15

    A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect whether a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel local structural rotation-invariant descriptors for the atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, the atom matching based on local similarity indices decreases the number of test trials and significantly reduces the dimensionality of the Hungarian assignment problem. Experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of the required computational time and accuracy. PMID:26037060
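
Once an alignment is fixed, the atom correspondence described above is an assignment problem. The sketch below shows the standard way to solve it with scipy's linear_sum_assignment on an inter-atomic distance cost matrix; the local-descriptor filtering that the paper uses to shrink this problem is omitted, and the toy coordinates are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_atoms(coords_a, coords_b):
    """Match atoms of two aligned conformers by minimizing the total pairwise distance."""
    # Cost matrix: Euclidean distance between every atom of A and every atom of B.
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    cost = np.linalg.norm(diff, axis=2)
    rows, cols = linear_sum_assignment(cost)      # Hungarian-style optimal assignment
    return list(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum())

# Toy usage: a 4-atom "molecule" and a shuffled, slightly perturbed copy of it.
rng = np.random.default_rng(0)
a = rng.random((4, 3))
b = a[[2, 0, 3, 1]] + 0.01 * rng.standard_normal((4, 3))
pairs, total_cost = match_atoms(a, b)
```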

  14. Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.

    PubMed

    Semnani, Samaneh Hosseini; Basir, Otman A

    2015-01-01

    The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with the other two flocking-based algorithms, once using randomly moving targets and once using a standard walking pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area coverage and target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms. PMID:25014985
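
A highly simplified sketch of the Semi-Flocking idea described above is given below: a small flock of the nearest sensors is assigned to each target and moves toward it, while unassigned sensors keep exploring at random. The flock size, step size, and random-walk exploration are illustrative assumptions, not the published control law.

```python
import numpy as np

def semi_flocking_step(sensors, targets, flock_size=3, step=0.1, rng=None):
    """Move each sensor one step: small flocks converge on targets, the rest explore."""
    if rng is None:
        rng = np.random.default_rng()
    sensors = sensors.copy()
    assigned = set()
    for t in targets:
        # Pick the closest unassigned sensors to form this target's small flock.
        d = np.linalg.norm(sensors - t, axis=1)
        flock = [i for i in np.argsort(d) if i not in assigned][:flock_size]
        for i in flock:
            sensors[i] += step * (t - sensors[i])      # move toward the tracked target
            assigned.add(i)
    for i in range(len(sensors)):
        if i not in assigned:                          # free sensors keep exploring the area
            sensors[i] += step * rng.uniform(-1, 1, size=2)
    return sensors

# Toy usage: ten sensors, two targets in a 10 x 10 area.
rng = np.random.default_rng(0)
sensors = rng.uniform(0, 10, size=(10, 2))
targets = np.array([[2.0, 3.0], [7.0, 8.0]])
sensors = semi_flocking_step(sensors, targets, rng=rng)
```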

  15. Dynamic classification using case-specific training cohorts outperforms static gene expression signatures in breast cancer

    PubMed Central

    Győrffy, Balázs; Karn, Thomas; Sztupinszki, Zsófia; Weltz, Boglárka; Müller, Volkmar; Pusztai, Lajos

    2015-01-01

    The molecular diversity of breast cancer makes it impossible to identify prognostic markers that are applicable to all breast cancers. To overcome limitations of previous multigene prognostic classifiers, we propose a new dynamic predictor: instead of using a single universal training cohort and an identical list of informative genes to predict the prognosis of new cases, a case-specific predictor is developed for each test case. Gene expression data from 3,534 breast cancers with clinical annotation including relapse-free survival is analyzed. For each test case, we select a case-specific training subset including only molecularly similar cases and a case-specific predictor is generated. This method yields different training sets and different predictors for each new patient. The model performance was assessed in leave-one-out validation and also in 325 independent cases. Prognostic discrimination was high for all cases (n = 3,534, HR = 3.68, p = 1.67 E−56). The dynamic predictor showed higher overall accuracy (0.68) than genomic surrogates for Oncotype DX (0.64), Genomic Grade Index (0.61) or MammaPrint (0.47). The dynamic predictor was also effective in triple-negative cancers (n = 427, HR = 3.08, p = 0.0093) where the above classifiers all failed. Validation in independent patients yielded similar classification power (HR = 3.57). The dynamic classifier is available online at http://www.recurrenceonline.com/?q=Re_training. In summary, we developed a new method to make personalized prognostic prediction using case-specific training cohorts. The dynamic predictors outperform static models developed from single historical training cohorts and they also predict well in triple-negative cancers. PMID:25274406
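
The dynamic predictor described above builds a new training cohort of molecularly similar cases for every test case. The sketch below illustrates that idea under several assumptions that are not from the paper: Pearson correlation over expression profiles as the similarity measure, a fixed cohort size, and logistic regression as the case-specific predictor.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dynamic_predict(x_test, X_train, y_train, cohort_size=200):
    """Train a case-specific model on only the most similar historical cases."""
    # Similarity: Pearson correlation between the test profile and each training profile.
    Xc = X_train - X_train.mean(axis=1, keepdims=True)
    xc = x_test - x_test.mean()
    sim = (Xc @ xc) / (np.linalg.norm(Xc, axis=1) * np.linalg.norm(xc) + 1e-12)
    cohort = np.argsort(sim)[::-1][:cohort_size]       # case-specific training subset
    model = LogisticRegression(max_iter=1000).fit(X_train[cohort], y_train[cohort])
    return model.predict_proba(x_test.reshape(1, -1))[0, 1]

# Toy usage: 500 historical cases with 50 "genes" each and binary relapse labels.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((500, 50)), rng.integers(0, 2, size=500)
risk = dynamic_predict(rng.standard_normal(50), X, y, cohort_size=100)
```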

  16. Do Cultivated Varieties of Native Plants Have the Ability to Outperform Their Wild Relatives?

    PubMed Central

    Schröder, Roland; Prasse, Rüdiger

    2013-01-01

    Vast numbers of cultivars of native plants are introduced annually into the semi-natural range of their wild relatives for re-vegetation and restoration. As cultivars are often selected for enhanced biomass production and might transfer these traits into wild relatives by hybridization, it is suggested that cultivars and wild × cultivar hybrids are competitively superior to their wild relatives. The release of such varieties may therefore result in unintended changes in native vegetation. In this study we examined, for two species frequently used in re-vegetation (Plantago lanceolata and Lotus corniculatus), whether cultivars and artificially generated intra-specific wild × cultivar hybrids produce higher vegetative and generative biomass than their wild relatives. For that purpose, a competition experiment was conducted over two growing seasons in a common garden. Every plant type was grown (a) alone, (b) in pairwise combination with the same plant type, and (c) in pairwise combination with a different plant type. When competing with wild plants, cultivars of both species showed larger biomass production than their wild relatives in the first year only, and hybrids showed larger biomass production than their wild relatives in both study years. As biomass production is an important factor determining fitness and competitive ability, we conclude that cultivars and hybrids are competitively superior to their wild relatives. However, cultivars of both species experienced large fitness reductions (nearly complete mortality in L. corniculatus) due to local climatic conditions. We conclude that cultivars are good competitors only as long as they are not subjected to stressful environmental factors. As hybrids seemed to inherit both the ability to cope with the local climatic conditions from their wild parents and the enhanced competitive strength from their cultivar parents, we regard them as strong competitors and assume that they are able to outperform their wilds at least over

  17. Managed Bumblebees Outperform Honeybees in Increasing Peach Fruit Set in China: Different Limiting Processes with Different Pollinators

    PubMed Central

    Williams, Paul H.; Vaissière, Bernard E.; Zhou, Zhiyong; Gai, Qinbao; Dong, Jie; An, Jiandong

    2015-01-01

    Peach Prunus persica (L.) Batsch is self-compatible and largely self-fertile, but under greenhouse conditions pollinators must be introduced to achieve good fruit set and quality. Because little work has been done to assess the effectiveness of different pollinators on peach trees under greenhouse conditions, we studied ‘Okubo’ peach in greenhouse tunnels near Beijing between 2012 and 2014. We measured pollen deposition, pollen-tube growth rates, ovary development, and initial fruit set after the flowers were visited by either of two managed pollinators: bumblebees, Bombus patagiatus Nylander, and honeybees, Apis mellifera L. The results show that B. patagiatus is more effective than A. mellifera as a pollinator of peach in greenhouses because of differences in two processes. First, B. patagiatus deposits more pollen grains on peach stigmas than A. mellifera, both during a single visit and during a whole day of open pollination. Second, there are differences in the fertilization performance of the pollen deposited. Half of the flowers visited by B. patagiatus are fertilized 9–11 days after bee visits, while for flowers visited by A. mellifera, half are fertilized 13–15 days after bee visits. Consequently, fruit development is also accelerated by bumblebees, showing that the different pollinators have not only different pollination efficiency, but also influence the subsequent time course of fertilization and fruit set. Flowers visited by B. patagiatus show faster ovary growth and ultimately these flowers produce more fruit. Our work shows that pollinators may influence fruit production beyond the amount of pollen delivered. We show that managed indigenous bumblebees significantly outperform introduced honeybees in increasing peach initial fruit set under greenhouse conditions. PMID:25799170

  18. Structured bilaminar co-culture outperforms stem cells and disc cells in a simulated degenerate disc environment

    PubMed Central

    Allon, Aliza A.; Butcher, Kristin; Schneider, Richard A.; Lotz, Jeffrey C.

    2011-01-01

    Study Design This study explores the use of bilaminar co-culture pellets of mesenchymal stem cells (MSC) and nucleus pulposus cells (NPC) as a cell-based therapy for intervertebral disc regeneration. The pellets were tested under conditions that mimic the degenerative disc. Objective Our goal is to optimize our cell-based therapy in vitro under conditions representative of the eventual diseased tissue. Summary of Background Data Harnessing the potential of stem cells is an important strategy for regenerative medicine. Our approach seeks to direct the behavior of stem cells by mimicking embryonic processes underlying cartilage and intervertebral disc development. Prior experiments have shown that bilaminar co-culture can help differentiate MSC and substantially improve new matrix deposition. Methods We have designed a novel spherical bilaminar cell pellet (BCP) in which MSC are enclosed in a shell of NPC. There were three groups: MSC, NPC, and BCP. The pellets were tested under three different culture conditions: in a bioreactor that provides pressure and hypoxia (mimicking normal disc conditions), with inflammatory cytokines (IL-1β and TNF-α), and in a bioreactor with inflammation (mimicking painful disc conditions). Results When cultured in the bioreactor, the NPC pellets produced significantly more glycosaminoglycan (GAG)/cell than the other groups: 70-80% more than the BCP and MSC alone. When cultured in an inflammatory environment, the MSC and BCP groups produced 30-34% more GAG/cell than NPC (p<0.05). When the pellets were cultured in a bioreactor with inflammation, the BCP made 25% more GAG/cell than MSC and 57% more than NPC (p<0.05). Conclusion This study shows that BCPs outperform controls in a simulated degenerated disc environment. Adapting inductive mechanisms from development to trigger differentiation and restore diseased tissue has many advantages. As opposed to strategies that require growth factor supplements or genetic manipulations, our method is self

  19. Does Cognitive Behavioral Therapy for Youth Anxiety Outperform Usual Care in Community Clinics? An Initial Effectiveness Test

    ERIC Educational Resources Information Center

    Southam-Gerow, Michael A.; Weisz, John R.; Chu, Brian C.; McLeod, Bryce D.; Gordis, Elana B.; Connor-Smith, Jennifer K.

    2010-01-01

    Objective: Most tests of cognitive behavioral therapy (CBT) for youth anxiety disorders have shown beneficial effects, but these have been efficacy trials with recruited youths treated by researcher-employed therapists. One previous (nonrandomized) trial in community clinics found that CBT did not outperform usual care (UC). The present study used…

  20. After Two Years, Three Elementary Math Curricula Outperform a Fourth. NCEE Technical Appendix. NCEE 2013-4019

    ERIC Educational Resources Information Center

    Agodini, Roberto; Harris, Barbara; Remillard, Janine; Thomas, Melissa

    2013-01-01

    This appendix provides the details that underlie the analyses reported in the evaluation brief, "After Two Years, Three Elementary Math Curricula Outperform a Fourth." The details are organized in six sections: Study Curricula and Design (Section A), Data Collection (Section B), Construction of the Analysis File (Section C), Curriculum Effects on…

  1. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and, more generally, in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods, the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weights and thins out particles with low weights, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
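
    As a rough illustration of the weight-ratio-bounding idea described in this abstract, the sketch below splits heavy particles and stochastically thins light ones. It is a simplified, assumption-laden rendering, not the published chopthin implementation; the threshold rule, the parameter eta and the function name are invented for illustration.

```python
import numpy as np

def bound_ratio_resample(weights, eta=4.0, rng=None):
    """Illustrative resampler that loosely bounds the ratio between particle
    weights by splitting ("chopping") heavy particles and stochastically
    thinning light ones.  A simplified sketch of the idea in the abstract,
    not the published chopthin algorithm.  Returns (indices, new_weights)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mean_w = w.mean()
    heavy = w > eta * mean_w          # particles far above the mean weight
    idx, new_w = [], []
    for i, wi in enumerate(w):
        if heavy[i]:
            # Chop: split a heavy particle into k equally weighted copies.
            k = int(np.ceil(wi / (eta * mean_w)))
            idx.extend([i] * k)
            new_w.extend([wi / k] * k)
        else:
            # Thin: keep a light particle with probability proportional to its
            # weight, re-weighting survivors so the step is unbiased.
            p = min(1.0, wi / mean_w)
            if rng.random() < p:
                idx.append(i)
                new_w.append(wi / p)
    new_w = np.asarray(new_w)
    return np.asarray(idx), new_w / new_w.sum()
```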

  2. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  3. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

    NASA Astrophysics Data System (ADS)

    Liu, Jianming; Grant, Steven L.; Benesty, Jacob

    2015-12-01

    A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to μ-law and ℓ0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to ℓ0 PAPA and μ-law PAPA, in terms of convergence speed and tracking ability. Moreover, the proposed RPAPA MRAP has much lower computational complexity than PAPA, μ-law PAPA, and ℓ0 PAPA, which makes it very appealing for real-time implementation.
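
    For background on the adaptive-filter family referenced above, the following is a plain normalized LMS identification loop. It is not the proposed RPAPA MRAP algorithm (which adds proportionate step sizes, memory and row action projections); the step size, filter length and regularization constant are arbitrary illustrative choices.

```python
import numpy as np

def nlms_identify(x, d, n_taps=64, mu=0.5, eps=1e-6):
    """Basic normalized LMS system identification: adapt filter h so that
    filtering the input x approximates the desired signal d.  Background
    sketch only; proportionate/affine-projection algorithms such as PAPA
    refine this update."""
    h = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]            # most recent input vector
        y = h @ u                             # filter output
        e[n] = d[n] - y                       # a-priori error
        h += mu * e[n] * u / (u @ u + eps)    # normalized gradient step
    return h, e
```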

  4. C-element: a new clustering algorithm to find high quality functional modules in PPI networks.

    PubMed

    Ghasemi, Mahdieh; Rahgozar, Maseud; Bidkhori, Gholamreza; Masoudi-Nejad, Ali

    2013-01-01

    Graph clustering algorithms are widely used in the analysis of biological networks. Extracting functional modules in protein-protein interaction (PPI) networks is one such use. Most clustering algorithms that focus on finding functional modules try either to find clique-like subnetworks or to grow clusters starting from high-degree vertices as seeds. These algorithms do not distinguish between a biological network and any other network. In the current research, we present a new procedure to find functional modules in PPI networks. Our main idea is to model a biological concept and to use this concept for finding good functional modules in PPI networks. In order to evaluate the quality of the obtained clusters, we compared the results of our algorithm with those of some other widely used clustering algorithms on three high-throughput PPI networks from Saccharomyces cerevisiae, Homo sapiens and Caenorhabditis elegans, as well as on some tissue-specific networks. Gene Ontology (GO) analyses were used to compare the results of different algorithms. Each algorithm's result was then compared with GO-term-derived functional modules. We also analyzed the effect of using tissue-specific networks on the quality of the obtained clusters. The experimental results indicate that the new algorithm outperforms most of the others, and this improvement is more significant when tissue-specific networks are used. PMID:24039752

  5. Improved DTI registration allows voxel-based analysis that outperforms tract-based spatial statistics.

    PubMed

    Schwarz, Christopher G; Reid, Robert I; Gunter, Jeffrey L; Senjem, Matthew L; Przybelski, Scott A; Zuk, Samantha M; Whitwell, Jennifer L; Vemuri, Prashanthi; Josephs, Keith A; Kantarci, Kejal; Thompson, Paul M; Petersen, Ronald C; Jack, Clifford R

    2014-07-01

    Tract-Based Spatial Statistics (TBSS) is a popular software pipeline to coregister sets of diffusion tensor Fractional Anisotropy (FA) images for performing voxel-wise comparisons. It is primarily defined by its skeleton projection step intended to reduce effects of local misregistration. A white matter "skeleton" is computed by morphological thinning of the inter-subject mean FA, and then all voxels are projected to the nearest location on this skeleton. Here we investigate several enhancements to the TBSS pipeline based on recent advances in registration for other modalities, principally based on groupwise registration with the ANTS-SyN algorithm. We validate these enhancements using simulation experiments with synthetically-modified images. When used with these enhancements, we discover that TBSS's skeleton projection step actually reduces algorithm accuracy, as the improved registration leaves fewer errors to warrant correction, and the effects of this projection's compromises become stronger than those of its benefits. In our experiments, our proposed pipeline without skeleton projection is more sensitive for detecting true changes and has greater specificity in resisting false positives from misregistration. We also present comparative results of the proposed and traditional methods, both with and without the skeleton projection step, on three real-life datasets: two comparing differing populations of Alzheimer's disease patients to matched controls, and one comparing progressive supranuclear palsy patients to matched controls. The proposed pipeline produces more plausible results according to each disease's pathophysiology. PMID:24650605

  6. Outperforming whom? A multilevel study of performance-prove goal orientation, performance, and the moderating role of shared team identification.

    PubMed

    Dietz, Bart; van Knippenberg, Daan; Hirst, Giles; Restubog, Simon Lloyd D

    2015-11-01

    Performance-prove goal orientation affects performance because it drives people to try to outperform others. A proper understanding of the performance-motivating potential of performance-prove goal orientation requires, however, that we consider the question of whom people desire to outperform. In a multilevel analysis of this issue, we propose that the shared team identification of a team plays an important moderating role here, directing the performance-motivating influence of performance-prove goal orientation to either the team level or the individual level of performance. A multilevel study of salespeople nested in teams supports this proposition, showing that performance-prove goal orientation motivates team performance more with higher shared team identification, whereas performance-prove goal orientation motivates individual performance more with lower shared team identification. Establishing the robustness of these findings, a second study replicates them with individual and team performance in an educational context. PMID:26011723

  7. Performance appraisal of estimation algorithms and application of estimation algorithms to target tracking

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanlue

    This dissertation consists of two parts. The first part deals with the performance appraisal of estimation algorithms. The second part focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, as estimation theory evolves and problem complexity increases, it is becoming more and more challenging for engineers to draw comprehensive conclusions from performance appraisal. However, the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures, which include local performance measures, global performance measures and a model distortion measure. The second part focuses on the application of recursive best linear unbiased estimation (BLUE), or linear minimum mean square error (LMMSE) estimation, to nonlinear measurement problems in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by my advisor Dr. X. Rong Li. Based on the so-called quasi-recursive best linear unbiased filtering technique, the Kalman filter's linear-Gaussian assumptions can be relaxed such that a general linear filtering technique for nonlinear systems can be achieved. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking, and it outperforms the existing method significantly in terms of accuracy, credibility and robustness.
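
    For context only, a textbook linear Kalman filter predict/update step is sketched below; the recursive BLUE filtering discussed above is precisely about relaxing the linear-Gaussian assumptions baked into these equations, so this is the baseline rather than the proposed method. Matrix names follow common convention and are not taken from the dissertation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the textbook linear Kalman filter with
    state transition F, measurement matrix H, process noise Q and
    measurement noise R.  Baseline sketch, not the recursive BLUE filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```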

  8. A novel community health worker tool outperforms WHO clinical staging for assessment of antiretroviral therapy eligibility in a resource-limited setting.

    PubMed

    Macpherson, Peter; Lalloo, David G; Thindwa, Deus; Webb, Emily L; Squire, S Bertel; Chipungu, Geoffrey A; Desmond, Nicola; Makombe, Simon D; Taegtmeyer, Miriam; Choko, Augustine T; Corbett, Elizabeth L

    2014-02-01

    The accuracy of a novel community health worker antiretroviral therapy eligibility assessment tool was examined in community members in Blantyre, Malawi. Nurses independently performed World Health Organization (WHO) staging and CD4 counts. One hundred ten (55.6%) of 198 HIV-positive participants had a CD4 count of <350 cells per cubic millimeter. The community health worker tool significantly outperformed WHO clinical staging in identifying CD4 count of <350 cells per cubic millimeter in terms of sensitivity (41% vs. 19%), positive predictive value (75% vs. 68%), negative predictive values (53% vs. 47%), and area under the receiver-operator curve (0.62 vs. 0.54; P = 0.017). Reliance on WHO staging is likely to result in missed and delayed antiretroviral therapy initiation. PMID:23846567
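
    The metrics quoted above (sensitivity, positive and negative predictive value) follow directly from a 2×2 confusion matrix; a small helper is sketched below. The counts in the usage line are made-up placeholders, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration only (not the study's data):
print(diagnostic_metrics(tp=45, fp=15, fn=65, tn=73))
```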

  9. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response

    NASA Astrophysics Data System (ADS)

    Maiti, A.; Small, W.; Lewicki, J. P.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.

    2016-04-01

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance.

  10. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response.

    PubMed

    Maiti, A; Small, W; Lewicki, J P; Weisgraber, T H; Duoss, E B; Chinn, S C; Pearson, M A; Spadaccini, C M; Maxwell, R S; Wilson, T S

    2016-01-01

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter's improved long-term stability and mechanical performance. PMID:27117858

  11. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response

    DOE PAGESBeta

    Maiti, A.; Small, W.; Lewicki, J.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.

    2016-04-27

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance.

  12. 3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response

    PubMed Central

    Maiti, A.; Small, W.; Lewicki, J. P.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.

    2016-01-01

    3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance. PMID:27117858

  13. Optimal classification of standoff bioaerosol measurements using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Nyhavn, Ragnhild; Moen, Hans J. F.; Farsund, Øystein; Rustad, Gunnar

    2011-05-01

    Early warning systems based on standoff detection of biological aerosols require real-time signal processing of a large quantity of high-dimensional data, challenging the system's efficiency in terms of both computational complexity and classification accuracy. Hence, optimal feature selection is essential in forming a stable and efficient classification system. This involves finding optimal signal processing parameters, characteristic spectral frequencies and other data transformations in a variable space of large magnitude, underscoring the need for an efficient and smart search algorithm. Evolutionary algorithms are population-based optimization methods inspired by Darwinian evolutionary theory. These methods apply selection, mutation and recombination to a population of competing solutions and optimize this set by evolving the population of solutions over generations. We have employed genetic algorithms in the search for optimal feature selection and signal processing parameters for classification of biological agents. The experimental data were acquired with a spectrally resolved lidar based on ultraviolet laser-induced fluorescence, and included several releases of 5 common simulants. The genetic algorithm outperforms benchmark methods involving analytic, sequential and random approaches, such as support vector machines, Fisher's linear discriminant and principal component analysis, with significantly improved classification accuracy compared to the best classical method.
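
    A minimal genetic-algorithm loop for binary feature selection is sketched below to make the selection/mutation/recombination cycle concrete. The population size, mutation rate, tournament selection and one-point crossover are generic illustrative choices, not the authors' configuration; the fitness function (e.g. cross-validated classification accuracy on the selected features) is supplied by the caller.

```python
import numpy as np

def ga_feature_select(fitness, n_features, pop_size=40, n_gen=100,
                      p_mut=0.02, rng=None):
    """Minimal GA for binary feature selection.  `fitness` maps a boolean
    mask of selected features to a score; illustrative parameters only."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.random((pop_size, n_features)) < 0.5     # random initial masks
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection of parents.
        parents = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)
            parents.append(pop[i] if scores[i] >= scores[j] else pop[j])
        parents = np.array(parents)
        # One-point crossover on consecutive pairs.
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            cut = int(rng.integers(1, n_features))
            children[k, cut:], children[k + 1, cut:] = (
                parents[k + 1, cut:].copy(), parents[k, cut:].copy())
        # Bit-flip mutation.
        children = children ^ (rng.random(children.shape) < p_mut)
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]
```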

  14. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which significantly reduces training efficiency. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In addition, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which enables layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  15. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    PubMed

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which significantly reduces training efficiency. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In addition, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which enables layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  16. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  17. An MM-Based Algorithm for ℓ1-Regularized Least-Squares Estimation With an Application to Ground Penetrating Radar Image Reconstruction.

    PubMed

    Ndoye, Mandoye; Anderson, John M M; Greene, David J

    2016-05-01

    An estimation method known as the least absolute shrinkage and selection operator (LASSO) or ℓ1-regularized LS estimation has been found to perform well in a number of applications. In this paper, we use the majorize-minimize method to develop an algorithm for minimizing the LASSO objective function, which is the sum of a linear LS objective function and an ℓ1 penalty term. The proposed algorithm, which we call the LASSO estimation via majorization-minimization (LMM) algorithm, is straightforward to implement, parallelizable, and guaranteed to produce LASSO objective function values that monotonically decrease. In addition, we formulate an extension of the LMM algorithm for reconstructing ground penetrating radar (GPR) images that is much faster than the standard LMM algorithm and utilizes significantly less memory. Thus, the GPR-specific LMM (GPR-LMM) algorithm is able to accommodate the big data associated with GPR imaging. We compare our proposed algorithms to state-of-the-art ℓ1-regularized LS algorithms using a time and space complexity analysis. The GPR-LMM greatly outperforms the competing algorithms in terms of the performance metrics we considered. In addition, the reconstruction results of the standard LMM and GPR-LMM algorithms are evaluated using both simulated and real GPR data. PMID:26800538
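
    To make the LASSO objective concrete, the sketch below runs a generic proximal-gradient (ISTA-style) iteration on 0.5·||Ax − b||² + λ||x||₁. It is not the paper's LMM or GPR-LMM algorithm, whose majorizer and parallel structure differ; the iteration count and step size rule are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Generic proximal-gradient (ISTA) minimization of
    0.5 * ||A x - b||^2 + lam * ||x||_1.  Same objective class as LMM,
    but not the paper's algorithm."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```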

  18. An Intelligent Model for Pairs Trading Using Genetic Algorithms

    PubMed Central

    Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An

    2015-01-01

    Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice. PMID:26339236

  19. An Intelligent Model for Pairs Trading Using Genetic Algorithms.

    PubMed

    Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An

    2015-01-01

    Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice. PMID:26339236

  20. 1/f Noise Outperforms White Noise in Sensitizing Baroreflex Function in the Human Brain

    NASA Astrophysics Data System (ADS)

    Soma, Rika; Nozaki, Daichi; Kwak, Shin; Yamamoto, Yoshiharu

    2003-08-01

    We show that externally added 1/f noise more effectively sensitizes the baroreflex centers in the human brain than white noise. We examined the compensatory heart rate response to a weak periodic signal introduced via venous blood pressure receptors while adding 1/f or white noise with the same variance to the brain stem through bilateral cutaneous stimulation of the vestibular afferents. In both cases, this noisy galvanic vestibular stimulation optimized covariance between the weak input signals and the heart rate responses. However, the optimal level with 1/f noise was significantly lower than with white noise, suggesting a functional benefit of 1/f noise for neuronal information transfer in the brain.

  1. Identification of coherent patterns in gene expression data using an efficient biclustering algorithm and parallel coordinate visualization

    PubMed Central

    Cheng, Kin-On; Law, Ngai-Fong; Siu, Wan-Chi; Liew, Alan Wee-Chung

    2008-01-01

    Background DNA microarray technology allows the measurement of expression levels of thousands of genes under tens to hundreds of different conditions. In microarray data, genes with similar functions usually co-express under certain conditions only [1]. Thus, biclustering, which clusters genes and conditions simultaneously, is preferred over traditional clustering techniques in discovering these coherent genes. Various biclustering algorithms have been developed using different bicluster formulations. Unfortunately, many useful formulations result in NP-complete problems. In this article, we investigate an efficient method for identifying a popular type of bicluster known as the additive model. Furthermore, parallel coordinate (PC) plots are used for bicluster visualization and analysis. Results We develop a novel and efficient biclustering algorithm which can be regarded as a greedy version of an existing algorithm known as the pCluster algorithm. By relaxing the homogeneity constraint, the proposed algorithm has polynomial-time complexity in the worst case instead of the exponential-time complexity of the pCluster algorithm. Experiments on artificial datasets verify that our algorithm can identify both additive-related and multiplicative-related biclusters in the presence of overlap and noise. Biologically significant biclusters have been validated on the yeast cell-cycle expression dataset using Gene Ontology annotations. A comparative study shows that the proposed approach outperforms several existing biclustering algorithms. We also provide an interactive exploratory tool based on PC plot visualization for determining the parameters of our biclustering algorithm. Conclusion We have proposed a novel biclustering algorithm which works with PC plots for an interactive exploratory analysis of gene expression data. Experiments show that the biclustering algorithm is efficient and is capable of detecting co-regulated genes. The interactive analysis enables an optimum

  2. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of the signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
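
    For reference, a standard orthogonal matching pursuit loop is sketched below; LPMP modifies this greedy structure by matching Lorentzian peak shapes rather than generic dictionary atoms. The fixed-sparsity stopping rule and dictionary argument here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Standard orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then refit by least squares
    on the selected support.  Baseline OMP only, not the LPMP variant."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares refit on the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```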

  3. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.

  4. Microscopy outperformed in a comparison of five methods for detecting Trichomonas vaginalis in symptomatic women.

    PubMed

    Nathan, B; Appiah, J; Saunders, P; Heron, D; Nichols, T; Brum, R; Alexander, S; Baraitser, P; Ison, C

    2015-03-01

    In the UK, despite its low sensitivity, wet mount microscopy is often the only method of detecting Trichomonas vaginalis infection. A study was conducted in symptomatic women to compare the performance of five methods for detecting T. vaginalis: an in-house polymerase chain reaction (PCR); the Aptima T. vaginalis kit; the OSOM® Trichomonas Rapid Test; culture; and microscopy. Symptomatic women underwent routine testing; microscopy and further swabs were taken for molecular testing, OSOM and culture. A true positive was defined as a sample that was positive for T. vaginalis by two or more different methods. Two hundred and forty-six women were recruited: 24 patients were positive for T. vaginalis by two or more different methods. Of these 24 patients, 21 patients were detected by real-time PCR (sensitivity 88%); 22 patients were detected by the Aptima T. vaginalis kit (sensitivity 92%); 22 patients were detected by OSOM (sensitivity 92%); nine were detected by wet mount microscopy (sensitivity 38%); and 21 were detected by culture (sensitivity 88%). Two patients were positive by just one method and were not considered true positives. All the other detection methods had a sensitivity to detect T. vaginalis that was significantly greater than wet mount microscopy, highlighting the number of cases that are routinely missed even in symptomatic women if microscopy is the only diagnostic method available. PMID:24855131

  5. Single tactile afferents outperform human subjects in a vibrotactile intensity discrimination task.

    PubMed

    Arabzadeh, Ehsan; Clifford, Colin W G; Harris, Justin A; Mahns, David A; Macefield, Vaughan G; Birznieks, Ingvars

    2014-11-15

    We simultaneously compared the sensitivity of single primary afferent neurons supplying the glabrous skin of the hand and the psychophysical amplitude discrimination thresholds in human subjects for a set of vibrotactile stimuli delivered to the receptive field. All recorded afferents had a dynamic range narrower than the range of amplitudes across which the subjects could discriminate. However, when the vibration amplitude was chosen to be within the steepest part of the afferent's stimulus-response function the response of single afferents, defined as the spike count over the vibration duration (500 ms), was often more sensitive in discriminating vibration amplitude than the perceptual judgment of the participants. We quantified how the neuronal performance depended on the integration window: for short windows the neuronal performance was inferior to the performance of the subject. The neuronal performance progressively improved with increasing spike count duration and reached a level significantly above that of the subjects when the integration window was 250 ms or longer. The superiority in performance of individual neurons over observers could reflect a nonoptimal integration window or be due to the presence of noise between the sensory periphery and the cortical decision stage. Additionally, it could indicate that the range of perceptual sensitivity comes at the cost of discrimination through pooling across neurons with different response functions. PMID:25143540

  6. Protein-fold recognition using an improved single-source K diverse shortest paths algorithm.

    PubMed

    Lhota, John; Xie, Lei

    2016-04-01

    Protein structure prediction, when construed as a fold recognition problem, is one of the most important applications of similarity search in bioinformatics. A new protein-fold recognition method is reported which combines a single-source K diverse shortest path (SSKDSP) algorithm with Enrichment of Network Topological Similarity (ENTS) algorithm to search a graphic feature space generated using sequence similarity and structural similarity metrics. A modified, more efficient SSKDSP algorithm is developed to improve the performance of graph searching. The new implementation of the SSKDSP algorithm empirically requires 82% less memory and 61% less time than the current implementation, allowing for the analysis of larger, denser graphs. Furthermore, the statistical significance of fold ranking generated from SSKDSP is assessed using ENTS. The reported ENTS-SSKDSP algorithm outperforms original ENTS that uses random walk with restart for the graph search as well as other state-of-the-art protein structure prediction algorithms HHSearch and Sparks-X, as evaluated by a benchmark of 600 query proteins. The reported methods may easily be extended to other similarity search problems in bioinformatics and chemoinformatics. The SSKDSP software is available at http://compsci.hunter.cuny.edu/~leixie/sskdsp.html. Proteins 2016; 84:467-472. © 2016 Wiley Periodicals, Inc. PMID:26800480

  7. Amphipols Outperform Dodecylmaltoside Micelles in Stabilizing Membrane Protein Structure in the Gas Phase

    PubMed Central

    2014-01-01

    Noncovalent mass spectrometry (MS) is emerging as an invaluable technique to probe the structure, interactions, and dynamics of membrane proteins (MPs). However, maintaining native-like MP conformations in the gas phase using detergent solubilized proteins is often challenging and may limit structural analysis. Amphipols, such as the well characterized A8-35, are alternative reagents able to maintain the solubility of MPs in detergent-free solution. In this work, the ability of A8-35 to retain the structural integrity of MPs for interrogation by electrospray ionization-ion mobility spectrometry-mass spectrometry (ESI-IMS-MS) is compared systematically with the commonly used detergent dodecylmaltoside. MPs from the two major structural classes were selected for analysis, including two β-barrel outer MPs, PagP and OmpT (20.2 and 33.5 kDa, respectively), and two α-helical proteins, Mhp1 and GalP (54.6 and 51.7 kDa, respectively). Evaluation of the rotationally averaged collision cross sections of the observed ions revealed that the native structures of detergent solubilized MPs were not always retained in the gas phase, with both collapsed and unfolded species being detected. In contrast, ESI-IMS-MS analysis of the amphipol solubilized MPs studied resulted in charge state distributions consistent with less gas phase induced unfolding, and the presence of lowly charged ions which exhibit collision cross sections comparable with those calculated from high resolution structural data. The data demonstrate that A8-35 can be more effective than dodecylmaltoside at maintaining native MP structure and interactions in the gas phase, permitting noncovalent ESI-IMS-MS analysis of MPs from the two major structural classes, while gas phase dissociation from dodecylmaltoside micelles leads to significant gas phase unfolding, especially for the α-helical MPs studied. PMID:25495802

  8. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    PubMed

    Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M

    2013-01-01

    Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km(2) area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service. PMID:23799127

  9. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge.

    PubMed

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip Eddie; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-02-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p<0.05) and had efficient implementations with run times of 8 min and 3 s per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi

  10. Significant Treasures.

    ERIC Educational Resources Information Center

    Andrews, Ian A.

    1999-01-01

    Provides a crossword puzzle with an answer key corresponding to the book entitled "Significant Treasures/Tresors Parlants" that is filled with color and black-and-white prints of paintings and artifacts from 131 museums and art galleries as a sampling of the 2,200 such Canadian institutions. (CMK)

  11. Learning deterministic finite automata with a smart state labeling evolutionary algorithm.

    PubMed

    Lucas, Simon M; Reynolds, T Jeff

    2005-07-01

    Learning a Deterministic Finite Automaton (DFA) from a training set of labeled strings is a hard task that has been much studied within the machine learning community. It is equivalent to learning a regular language by example and has applications in language modeling. In this paper, we describe a novel evolutionary method for learning DFA that evolves only the transition matrix and uses a simple deterministic procedure to optimally assign state labels. We compare its performance with the Evidence Driven State Merging (EDSM) algorithm, one of the most powerful known DFA learning algorithms. We present results on random DFA induction problems of varying target size and training set density. We also study the effects of noisy training data on the evolutionary approach and on EDSM. On noise-free data, we find that our evolutionary method outperforms EDSM on small sparse data sets. In the case of noisy training data, we find that our evolutionary method consistently outperforms EDSM, as well as other significant methods submitted to two recent competitions. PMID:16013754

  12. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    PubMed

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266

  13. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data

    PubMed Central

    Puttick, Mark N.; Parry, Luke; Tanner, Alastair R.; Tarver, James E.; Fleming, James

    2016-01-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266

  14. How resilient are resilience scales? The Big Five scales outperform resilience scales in predicting adjustment in adolescents.

    PubMed

    Waaktaar, Trine; Torgersen, Svenn

    2010-04-01

    This study's aim was to determine whether resilience scales could predict adjustment over and above that predicted by the five-factor model (FFM). A sample of 1,345 adolescents completed paper-and-pencil scales on FFM personality (Hierarchical Personality Inventory for Children), resilience (Ego-Resiliency Scale [ER89] by Block & Kremen, the Resilience Scale [RS] by Wagnild & Young) and adaptive behaviors (California Healthy Kids Survey, UCLA Loneliness Scale and three measures of school adaptation). The results showed that the FFM scales accounted for the highest proportion of variance in disturbance. For adaptation, the resilience scales contributed as much as the FFM. In no case did the resilience scales outperform the FFM by increasing the explained variance. The results challenge the validity of the resilience concept as an indicator of human adaptation and avoidance of disturbance, although the concept may have heuristic value in combining favorable aspects of a person's personality endowment. PMID:19961558

  15. PhyPA: Phylogenetic method with pairwise sequence alignment outperforms likelihood methods in phylogenetics involving highly diverged sequences.

    PubMed

    Xia, Xuhua

    2016-09-01

    While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by the ML+MSA approach is caused not by insufficient search of tree space, but by the distortion of phylogenetic signal by MSA methods. I have implemented PhyPA in DAMBE, together with two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing. PMID:27377322

  16. An Evolved Wavelet Library Based on Genetic Algorithm

    PubMed Central

    Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.

    2014-01-01

    As the size of the images being captured increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the processes of quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image portion except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225

  17. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
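
    For reference, Batcher's odd-even mergesort is sketched below in plain Python (input length assumed to be a power of two). Each layer of compare-exchanges is data-independent, which is why the N(log N)-squared network maps so well onto vector operations on a machine like STAR; this is a generic illustration, not the CDC STAR code.

    ```python
    def oddeven_merge_sort(a):
        # Batcher's odd-even mergesort; len(a) must be a power of two.
        if len(a) <= 1:
            return list(a)
        half = len(a) // 2
        return oddeven_merge(oddeven_merge_sort(a[:half]) + oddeven_merge_sort(a[half:]))

    def oddeven_merge(a):
        # Merges a list whose two halves are each already sorted.
        n = len(a)
        if n == 2:
            return [min(a), max(a)]
        evens = oddeven_merge(a[0::2])            # recursively merge even-indexed elements
        odds = oddeven_merge(a[1::2])             # recursively merge odd-indexed elements
        merged = [None] * n
        merged[0::2], merged[1::2] = evens, odds
        for i in range(1, n - 1, 2):              # one layer of compare-exchanges
            if merged[i] > merged[i + 1]:
                merged[i], merged[i + 1] = merged[i + 1], merged[i]
        return merged

    print(oddeven_merge_sort([7, 3, 8, 1, 6, 4, 2, 5]))   # [1, 2, 3, 4, 5, 6, 7, 8]
    ```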

  18. Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2011-09-01

    The herein presented mutation-based artificial fish swarm (AFS) algorithm includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.

  19. Why envy outperforms admiration.

    PubMed

    van de Ven, Niels; Zeelenberg, Marcel; Pieters, Rik

    2011-06-01

    Four studies tested the hypothesis that the emotion of benign envy, but not the emotions of admiration or malicious envy, motivates people to improve themselves. Studies 1 to 3 found that only benign envy was related to the motivation to study more (Study 1) and to actual performance on the Remote Associates Task (which measures intelligence and creativity; Studies 2 and 3). Study 4 found that an upward social comparison triggered benign envy and subsequent better performance only when people thought self-improvement was attainable. When participants thought self-improvement was hard, an upward social comparison led to more admiration and no motivation to do better. Implications of these findings for theories of social emotions such as envy, social comparisons, and for understanding the influence of role models are discussed. PMID:21383070

  20. MEDUSAHEAD OUTPERFORMS SQUIRRELTAIL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Understanding the ecological processes fostering invasion and dominance by medusahead is central to its management. The objectives of this study were 1) to quantify and compare interference between medusahead and squirreltail under different concentrations of soil N and P and 2) to compare growth r...

  1. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
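
    As a concrete companion to the concepts surveyed above, the following minimal sketch shows the canonical GA loop: tournament selection, one-point crossover and bit-flip mutation on a bit-string encoding. The population size, rates and the one-max fitness are arbitrary illustrative choices, not taken from the report.

    ```python
    import random

    def genetic_algorithm(fitness, n_bits, pop_size=50, generations=100,
                          cx_rate=0.9, mut_rate=0.01):
        # Minimal generational GA: tournament selection, one-point crossover, bit-flip mutation.
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            children = []
            while len(children) < pop_size:
                p1, p2 = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
                if random.random() < cx_rate:                      # one-point crossover
                    cut = random.randint(1, n_bits - 1)
                    p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                for child in (p1, p2):                             # bit-flip mutation
                    children.append([b ^ (random.random() < mut_rate) for b in child])
            pop = children[:pop_size]
            best = max(pop + [best], key=fitness)
        return best

    # Example: maximize the number of ones in a 32-bit string.
    print(genetic_algorithm(sum, 32))
    ```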

  2. Performance evaluation of operational atmospheric correction algorithms over the East China Seas

    NASA Astrophysics Data System (ADS)

    He, Shuangyan; He, Mingxia; Fischer, Jürgen

    2016-04-01

    To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated using in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red-near-infrared bands. The value of α was obviously underestimated by the ESA algorithm (MPD=41%) but not by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, density scatter plots of α against single scattering albedo (SSA) were prepared. These α-SSA density scatter plots showed that the applicability of the aerosol models used by the NASA algorithm over the ECS is better than that used by the ESA algorithm, although neither aerosol model is suitable for the ECS region. The results of this study provide a reference to both data users and data agencies regarding the use of operational data products and the investigation into the improvement of current AC schemes over the ECS.

  3. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  4. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs. PMID:10199993

  5. Quantitative comparison of FBP, EM, and Bayesian reconstruction algorithms for the IndyPET scanner.

    PubMed

    Frese, Thomas; Rouze, Ned C; Bouman, Charles A; Sauer, Ken; Hutchins, Gary D

    2003-02-01

    We quantitatively compare filtered backprojection (FBP), expectation-maximization (EM), and Bayesian reconstruction algorithms as applied to the IndyPET scanner--a dedicated research scanner which has been developed for small and intermediate field of view imaging applications. In contrast to previous approaches that rely on Monte Carlo simulations, a key feature of our investigation is the use of an empirical system kernel determined from scans of line source phantoms. This kernel is incorporated into the forward model of the EM and Bayesian algorithms to achieve resolution recovery. Three data sets are used, data collected on the IndyPET scanner using a bar phantom and a Hoffman three-dimensional brain phantom, and simulated data containing a hot lesion added to a uniform background. Reconstruction quality is analyzed quantitatively in terms of bias-variance measures (bar phantom) and mean square error (lesion phantom). We observe that without use of the empirical system kernel, the FBP, EM, and Bayesian algorithms give similar performance. However, with the inclusion of the empirical kernel, the iterative algorithms provide superior reconstructions compared with FBP, both in terms of visual quality and quantitative measures. Furthermore, Bayesian methods outperform EM. We conclude that significant improvements in reconstruction quality can be realized by combining accurate models of the system response with Bayesian reconstruction algorithms. PMID:12716002
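
    For orientation, the multiplicative EM (MLEM) update used in emission tomography is sketched below in NumPy. The empirical system kernel discussed above is assumed to be folded into the system matrix A; this is a generic illustration without the Bayesian prior, not the IndyPET implementation.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        # A: system matrix (detector bins x voxels), with the measured system kernel
        # already incorporated; y: measured sinogram counts.
        x = np.ones(A.shape[1])             # uniform, non-negative starting image
        sens = A.sum(axis=0) + eps          # sensitivity image (column sums of A)
        for _ in range(n_iter):
            proj = A @ x + eps              # forward projection of the current estimate
            x *= (A.T @ (y / proj)) / sens  # multiplicative EM update keeps x >= 0
        return x
    ```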

  6. Maximal sum of metabolic exchange fluxes outperforms biomass yield as a predictor of growth rate of microorganisms.

    PubMed

    Zarecki, Raphy; Oberhardt, Matthew A; Yizhak, Keren; Wagner, Allon; Shtifman Segal, Ella; Freilich, Shiri; Henry, Christopher S; Gophna, Uri; Ruppin, Eytan

    2014-01-01

    Growth rate has long been considered one of the most valuable phenotypes that can be measured in cells. Aside from being highly accessible and informative in laboratory cultures, maximal growth rate is often a prime determinant of cellular fitness, and predicting phenotypes that underlie fitness is key to both understanding and manipulating life. Despite this, current methods for predicting microbial fitness typically focus on yields [e.g., predictions of biomass yield using GEnome-scale metabolic Models (GEMs)] or notably require many empirical kinetic constants or substrate uptake rates, which render these methods ineffective in cases where fitness derives most directly from growth rate. Here we present a new method for predicting cellular growth rate, termed SUMEX, which does not require any empirical variables apart from a metabolic network (i.e., a GEM) and the growth medium. SUMEX is calculated by maximizing the SUM of molar EXchange fluxes (hence SUMEX) in a genome-scale metabolic model. SUMEX successfully predicts relative microbial growth rates across species, environments, and genetic conditions, outperforming traditional cellular objectives (most notably, the convention assuming biomass maximization). The success of SUMEX suggests that the ability of a cell to catabolize substrates and produce a strong proton gradient enables fast cell growth. Easily applicable heuristics for predicting growth rate, such as what we demonstrate with SUMEX, may contribute to numerous medical and biotechnological goals, ranging from the engineering of faster-growing industrial strains to the modeling of mixed ecological communities and the inhibition of cancer growth. PMID:24866123
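
    In flux-balance terms, SUMEX swaps the usual biomass objective for the sum of exchange fluxes, optimized over the steady-state flux cone of the model. The sketch below poses this as a linear program with SciPy; the molar weighting and any normalization in the published method are not reproduced here, and the variable names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def sumex_objective(S, bounds, exchange_idx):
        # S: stoichiometric matrix (metabolites x reactions); bounds: per-reaction
        # (lower, upper) flux bounds encoding the growth medium; exchange_idx:
        # indices of the exchange reactions whose summed flux is maximized.
        n_reactions = S.shape[1]
        c = np.zeros(n_reactions)
        c[exchange_idx] = -1.0                      # linprog minimizes, so negate to maximize
        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=bounds, method="highs")
        return -res.fun, res.x                      # objective value and the flux vector
    ```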

  7. Current composite-feature classification methods do not outperform simple single-genes classifiers in breast cancer prognosis

    PubMed Central

    Staiger, Christine; Cadot, Sidney; Györffy, Balázs; Wessels, Lodewyk F. A.; Klau, Gunnar W.

    2013-01-01

    Integrating gene expression data with secondary data such as pathway or protein-protein interaction data has been proposed as a promising approach for improved outcome prediction of cancer patients. Methods employing this approach usually aggregate the expression of genes into new composite features, while the secondary data guide this aggregation. Previous studies were limited to few data sets with a small number of patients. Moreover, each study used different data and evaluation procedures. This makes it difficult to objectively assess the gain in classification performance. Here we introduce the Amsterdam Classification Evaluation Suite (ACES). ACES is a Python package to objectively evaluate classification and feature-selection methods and contains methods for pooling and normalizing Affymetrix microarrays from different studies. It is simple to use and therefore facilitates the comparison of new approaches to best-in-class approaches. In addition to the methods described in our earlier study (Staiger et al., 2012), we have included two prominent prognostic gene signatures specific for breast cancer outcome, one more composite feature selection method and two network-based gene ranking methods. Employing the evaluation pipeline, we show that current composite-feature classification methods do not outperform simple single-genes classifiers in predicting outcome in breast cancer. Furthermore, we find that the stability of features across different data sets is also not higher for composite features. Most strikingly, we observe that prediction performances are not affected when extracting features from randomized PPI networks. PMID:24391662

  8. A hybrid multi-objective particle swarm algorithm for a mixed-model assembly line sequencing problem

    NASA Astrophysics Data System (ADS)

    Rahimi-Vahed, A. R.; Mirghorbani, S. M.; Rabbani, M.

    2007-12-01

    Mixed-model assembly line sequencing is one of the most important strategic problems in the field of production management where diversified customers' demands exist. In this article, three major goals are considered: (i) total utility work, (ii) total production rate variation and (iii) total setup cost. Due to the complexity of the problem, a hybrid multi-objective algorithm based on particle swarm optimization (PSO) and tabu search (TS) is devised to obtain the locally Pareto-optimal frontier where simultaneous minimization of the above-mentioned objectives is desired. In order to validate the performance of the proposed algorithm in terms of solution quality and diversity level, the algorithm is applied to various test problems and its reliability, based on different comparison metrics, is compared with three prominent multi-objective genetic algorithms, PS-NC GA, NSGA-II and SPEA-II. The computational results show that the proposed hybrid algorithm significantly outperforms existing genetic algorithms in large-sized problems.

  9. A new evolutionary algorithm with structure mutation for the maximum balanced biclique problem.

    PubMed

    Yuan, Bo; Li, Bin; Chen, Huanhuan; Yao, Xin

    2015-05-01

    The maximum balanced biclique problem (MBBP), an NP-hard combinatorial optimization problem, has been attracting more attention in recent years. Existing node-deletion-based algorithms usually fail to find high-quality solutions due to their easy stagnation in local optima, especially when the scale of the problem grows large. In this paper, a new algorithm for the MBBP, an evolutionary algorithm with structure mutation (EA/SM), is proposed. In the EA/SM framework, local search complemented with a repair-assisted restart process is adopted. A new mutation operator, SM, is proposed to enhance exploration during the local search process. The SM operator can change the structure of solutions dynamically while keeping their size (fitness) and feasibility unchanged. It implements a kind of large mutation in the structure space of the MBBP to help the algorithm escape from local optima. An MBBP-specific local search operator is designed to improve the quality of solutions efficiently; besides, a new repair-assisted restart process is introduced, in which Marchiori's heuristic repair is modified to repair every new solution reinitialized by an estimation of distribution algorithm (EDA)-like process. The proposed algorithm is evaluated on a large set of benchmark graphs with various scales and densities. Experimental results show that: 1) EA/SM produces significantly better results than the state-of-the-art heuristic algorithms; 2) it also outperforms a repair-based EDA and a repair-based genetic algorithm on all benchmark graphs; and 3) the advantages of EA/SM are mainly due to the introduction of the new SM operator and the new repair-assisted restart process. PMID:25137737

  10. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
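
    For readers unfamiliar with the basic construction, the single-channel veto algorithm (without the competition between emission channels analyzed in the paper, and with only a simple lower cutoff) can be sketched as follows; the function arguments are placeholders for a density f, an overestimate g >= f, its primitive G, and the inverse of that primitive.

    ```python
    import math, random

    def veto_algorithm(f, g, G, G_inv, t_start, t_cutoff):
        # Samples the next emission scale below t_start for density f with Sudakov
        # suppression, using the overestimate g. A rejected trial scale becomes the
        # new starting point, which is what reproduces the exact Sudakov factor.
        t = t_start
        while True:
            t = G_inv(G(t) + math.log(random.random()))   # trial scale from the overestimate
            if t < t_cutoff:
                return None                               # no emission above the cutoff
            if random.random() < f(t) / g(t):             # accept with probability f/g
                return t
    ```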

  11. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that promising substructures or partial solutions can be generated by using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for conducting simulations. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  12. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  13. Invasive Acer negundo outperforms native species in non-limiting resource environments due to its higher phenotypic plasticity

    PubMed Central

    2011-01-01

    Background To identify the determinants of invasiveness, comparisons of traits of invasive and native species are commonly performed. Invasiveness is generally linked to higher values of reproductive, physiological and growth-related traits of the invasives relative to the natives in the introduced range. Phenotypic plasticity of these traits has also been cited to increase the success of invasive species but has been little studied in invasive tree species. In a greenhouse experiment, we compared ecophysiological traits between Acer negundo, a species invasive in Europe, and co-occurring early- and late-successional native species, under different light, nutrient availability and disturbance regimes. We also compared species of the same species groups in situ, in riparian forests. Results Under non-limiting resources, A. negundo seedlings showed higher growth rates than the native species. However, A. negundo displayed equivalent or lower photosynthetic capacities and nitrogen content per unit leaf area compared to the native species; these findings were observed both on the seedlings in the greenhouse experiment and on adult trees in situ. These physiological traits were mostly conservative along the different light, nutrient and disturbance environments. Overall, under non-limiting light and nutrient conditions, specific leaf area and total leaf area of A. negundo were substantially larger. The invasive species presented a higher plasticity in allocation to foliage and therefore in growth with increasing nutrient and light availability relative to the native species. Conclusions The higher level of plasticity of the invasive species in foliage allocation in response to light and nutrient availability induced a better growth in non-limiting resource environments. These results give us more elements on the invasiveness of A. negundo and suggest that such behaviour could explain the ability of A. negundo to outperform native tree species, contributes to its spread

  14. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization.

    PubMed

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It refers to a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on the adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
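
    As a baseline for the discussion above, plain global histogram equalization (the starting point of VCEA and CegaHE) can be written in a few lines of NumPy; CegaHE's gap-adjustment step, which re-spaces the output gray levels according to human visual perception, is not reproduced here.

    ```python
    import numpy as np

    def histogram_equalization(img):
        # img: 8-bit grayscale image as a NumPy array of dtype uint8.
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = np.cumsum(hist).astype(np.float64)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize the CDF to [0, 1]
        lut = np.round(cdf * 255).astype(np.uint8)          # map each gray level through the CDF
        return lut[img]
    ```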

  15. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET

    PubMed Central

    Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper, a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  16. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    PubMed Central

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It refers to a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on the adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412

  17. Enhanced Landweber algorithm via Bregman iterations for bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Zhang, Meng

    2014-09-01

    Bioluminescence tomography (BLT) is an important optical molecular imaging modality aimed at visualizing physiological and pathological processes at cellular and molecular levels. While the forward process of light propagation is described by the diffusion approximation to the radiative transfer equation, BLT is the inverse problem of reconstructing the 3D localization and quantification of the internal bioluminescent source distribution. Due to the inherent ill-posedness of the BLT problem, regularization is generally indispensable to obtain more favorable reconstructions. In particular, total variation (TV) regularization is known to be effective for piecewise-constant source distributions, which can permit sharp discontinuities and preserve edges. However, total variation regularization generally suffers from an unsatisfactory staircasing effect. In this work, we introduce Bregman iterative regularization to alleviate this degeneration and enhance the numerical reconstruction of BLT. Based on the existing Landweber method (LM), we put forward the Bregman-LM-TV algorithm for BLT. Numerical experiments are carried out and preliminary simulation results are reported to evaluate the proposed algorithms. It is found that Bregman-LM-TV can significantly outperform the individual Landweber method for BLT when the source distribution is piecewise-constant.
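
    The Landweber building block referred to above is a simple gradient iteration on the linear model A x = b. A projected variant is sketched below; the Bregman outer loop and the TV penalty that define Bregman-LM-TV are not reproduced, and the step-size rule is one common choice rather than the paper's.

    ```python
    import numpy as np

    def projected_landweber(A, b, n_iter=200, tau=None):
        # A: discretized measurement operator, b: boundary measurements,
        # x: reconstructed source distribution (kept non-negative).
        if tau is None:
            tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 2 / ||A||^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + tau * (A.T @ (b - A @ x))          # gradient step on 0.5 * ||A x - b||^2
            x = np.maximum(x, 0.0)                     # project onto the non-negative orthant
        return x
    ```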

  18. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET.

    PubMed

    Aadil, Farhan; Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and number of CHs determines the efficiency of network. In this paper a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  19. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    PubMed

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes

  20. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  1. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  2. A pruning-based disk scheduling algorithm for heterogeneous I/O workloads.

    PubMed

    Kim, Taeseok; Bahn, Hyokyung; Won, Youjip

    2014-01-01

    In heterogeneous I/O workload environments, disk scheduling algorithms should support different QoS (Quality-of-Service) for each I/O request. For example, the algorithm should meet the deadlines of real-time requests and at the same time provide reasonable response time for best-effort requests. This paper presents a novel disk scheduling algorithm called G-SCAN (Grouping-SCAN) for handling heterogeneous I/O workloads. To find a schedule that satisfies the deadline constraints and seek time minimization simultaneously, G-SCAN maintains a series of candidate schedules and expands the schedules whenever a new request arrives. Maintaining these candidate schedules requires excessive spatial and temporal overhead, but G-SCAN reduces the overhead to a manageable level via pruning the state space using two heuristics. One is grouping that clusters adjacent best-effort requests into a single scheduling unit and the other is the branch-and-bound strategy that cuts off inefficient or impractical schedules. Experiments with various synthetic and real-world I/O workloads show that G-SCAN outperforms existing disk scheduling algorithms significantly in terms of the average response time, throughput, and QoS-guarantees for heterogeneous I/O workloads. We also show that the overhead of G-SCAN is reasonable for on-line execution. PMID:24782678

  3. A novel neural-inspired learning algorithm with application to clinical risk prediction.

    PubMed

    Tay, Darwin; Poh, Chueh Loo; Kitney, Richard I

    2015-04-01

    Clinical risk prediction - the estimation of the likelihood that an individual is at risk of a disease - is a coveted and exigent clinical task, and a cornerstone to the recommendation of life saving management strategies. This is especially important for individuals at risk of cardiovascular disease (CVD) given the fact that it is the leading cause of death in many developed countries. To this end, we introduce a novel learning algorithm - a key factor that influences the performance of machine learning-based prediction models - and utilize it to develop a CVD risk prediction tool. This novel neural-inspired algorithm, called the Artificial Neural Cell System for classification (ANCSc), is inspired by mechanisms that develop the brain and empower it with capabilities such as information processing/storage and recall, decision making and initiating actions on the external environment. Specifically, we exploit 3 natural neural mechanisms responsible for developing and enriching the brain - namely neurogenesis, neuroplasticity via nurturing, and apoptosis - when implementing the ANCSc algorithm. Benchmark testing was conducted using the Honolulu Heart Program (HHP) dataset and results are juxtaposed with 2 other algorithms - i.e. Support Vector Machine (SVM) and Evolutionary Data-Conscious Artificial Immune Recognition System (EDC-AIRS). Empirical experiments indicate that the ANCSc algorithm (statistically) outperforms both the SVM and EDC-AIRS algorithms. Key clinical markers identified by the ANCSc algorithm include risk factors related to diet/lifestyle, pulmonary function, personal/family/medical history, blood data, blood pressure, and electrocardiography. These clinical markers, in general, are also found to be clinically significant - providing a promising avenue for identifying potential cardiovascular risk factors to be evaluated in clinical trials. PMID:25576352

  4. Performance comparison of neural network training algorithms in modeling of bimodal drug delivery.

    PubMed

    Ghaffari, A; Abdollahi, H; Khoshayand, M R; Bozchalooi, I Soltani; Dadgar, A; Rafiee-Tehrani, M

    2006-12-11

    The major aim of this study was to model the effect of two causal factors, i.e. coating weight gain and amount of pectin-chitosan in the coating solution, on the in vitro release profile of theophylline for bimodal drug delivery. An artificial neural network (ANN), as a multilayer perceptron feedforward network, was incorporated for developing a predictive model of the formulations. Five different training algorithms belonging to three classes: gradient descent, quasi-Newton (Levenberg-Marquardt, LM) and genetic algorithm (GA), were used to train an ANN containing a single hidden layer of four nodes. The next objective of the current study was to compare the performance of the aforementioned algorithms with regard to predicting ability. The ANNs were trained with those algorithms using the available experimental data as the training set. The divergence of the RMSE between the output and target values of the test set was monitored and used as a criterion to stop training. Two versions of the gradient descent backpropagation algorithm, i.e. incremental backpropagation (IBP) and batch backpropagation (BBP), outperformed the others. No significant differences were found between the predictive abilities of IBP and BBP, although the convergence speed of BBP is three- to four-fold higher than that of IBP. Although both the gradient descent backpropagation and LM methodologies gave comparable results for the data modeling, training of ANNs with the genetic algorithm was erratic. The precision of predictive ability was measured for each training algorithm and their performances were in the order of: IBP, BBP > LM > QP (quick propagation) > GA. According to the BBP-ANN implementation, an increase in coating levels and a decrease in the amount of pectin-chitosan generally retarded the drug release. Moreover, the latter causal factor, namely the amount of pectin-chitosan, played a slightly more dominant role in determining the dissolution profiles. PMID:16959449

  5. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  6. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining the global solution for numerical optimization problems. However, the fixed step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  7. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
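
    Two of the ideas highlighted above, removing exact duplicates before any comparison and reporting linked records as connected components of a similarity graph, are illustrated in the sketch below. Records are assumed to be hashable tuples, the pairwise `similar` predicate (e.g. an edit-distance threshold) is user-supplied, and the naive all-pairs loop stands in for the hierarchical clustering and blocking the paper actually uses.

    ```python
    from collections import defaultdict

    def link_records(records, similar):
        uniques = list(dict.fromkeys(records))        # drop exact duplicates, keep order
        parent = list(range(len(uniques)))            # union-find forest

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]         # path halving
                i = parent[i]
            return i

        for i in range(len(uniques)):                 # naive quadratic pass; real systems
            for j in range(i + 1, len(uniques)):      # block/cluster first to avoid this
                if similar(uniques[i], uniques[j]):
                    parent[find(i)] = find(j)         # union the two clusters

        clusters = defaultdict(list)
        for i, rec in enumerate(uniques):
            clusters[find(i)].append(rec)             # connected components = linked records
        return list(clusters.values())
    ```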

  8. Recent ATR and fusion algorithm improvements for multiband sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2009-05-01

    An improved automatic target recognition processing string has been developed. The overall processing string consists of pre-processing, subimage adaptive clutter filtering, normalization, detection, data regularization, feature extraction, optimal subset feature selection, feature orthogonalization and classification processing blocks. The objects that are classified by the 3 distinct ATR strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution three-frequency band sonar imagery. The ATR processing strings were individually tuned to the corresponding three-frequency band data, making use of the new processing improvement, data regularization; this improvement entails computing the input data mean, clipping the data to a multiple of its mean and scaling it, prior to feature extraction and resulted in a 3:1 reduction in false alarms. Two significant fusion algorithm improvements were made. First, a nonlinear exponential Box-Cox expansion (consisting of raising data to a to-be-determined power) feature LLRT fusion algorithm was developed. Second, a repeated application of a subset Box-Cox feature selection / feature orthogonalization / LLRT fusion block was utilized. It was shown that cascaded Box-Cox feature LLRT fusion of the ATR processing strings outperforms baseline "summing" and single-stage Box-Cox feature LLRT algorithms, yielding significant improvements over the best single ATR processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate.

  9. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inversed watershed (subsequently referred to as watershed based), while the second type is based on the simultaneous representation of a tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was obtained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm is robust to its parameters, but its results were worse than those obtained with the majority of parameter sets for the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.

  10. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  11. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    PubMed Central

    Hu, Zhongyi; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in the literature on commercial transactions in electricity markets as well. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425

  12. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, which is based on a toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is incorporated into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a kind of random linear method; and finally, the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters. This is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm, giving full play to the advantage of each algorithm. The method is verified on the widely used standard Fibonacci benchmark sequences and on real protein sequences. Experiments show that the proposed new method outperforms single algorithms in the accuracy of calculating the protein sequence energy value, which proves that it is an effective way to predict the structure of proteins. PMID:25069136

  13. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order, in a meaningful and strict way, the integer gray values in digital (quantized) images. It can be used in any application based on exact histogram specification. Our algorithm relies on an ordering procedure based on a specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than the alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, only a few iteration steps are needed to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms its main competitors by far. PMID:25347881

  14. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  15. Joint optimization of algorithmic suites for EEG analysis.

    PubMed

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable. PMID:25570621
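
    As a toy illustration of treating heterogeneous processing stages as one differentiable pipeline and back-propagating through all of them, the sketch below (assuming PyTorch) chains a made-up spatial projection, temporal FIR filtering, log-variance features and a linear classifier; the paper's actual stages, data and parameterization are not reproduced here.

        import torch
        import torch.nn as nn

        class EEGPipeline(nn.Module):
            # Toy pipeline: spatial projection -> temporal FIR filtering ->
            # log-variance features -> linear classifier.
            def __init__(self, n_channels=22, n_spatial=4, fir_len=15, n_classes=2):
                super().__init__()
                self.spatial = nn.Linear(n_channels, n_spatial, bias=False)
                self.temporal = nn.Conv1d(n_spatial, n_spatial, fir_len,
                                          groups=n_spatial, padding="same", bias=False)
                self.classifier = nn.Linear(n_spatial, n_classes)

            def forward(self, x):                      # x: (batch, time, channels)
                s = self.spatial(x).transpose(1, 2)    # (batch, n_spatial, time)
                t = self.temporal(s)
                feats = torch.log(t.var(dim=2) + 1e-6) # log-variance per component
                return self.classifier(feats)

        model = EEGPipeline()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.randn(8, 500, 22)                    # dummy trials
        y = torch.randint(0, 2, (8,))                  # dummy labels
        loss = nn.CrossEntropyLoss()(model(x), y)      # one joint objective
        loss.backward()                                # gradients flow through every stage
        opt.step()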

  16. Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data

    PubMed Central

    2012-01-01

    Background This paper addresses the prediction of the free energy of binding of a drug candidate with enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that could be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. Results The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor. PMID:23171000

  17. Sequence-Specific Copolymer Compatibilizers designed via a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Meenakshisundaram, Venkatesh; Patra, Tarak; Hung, Jui-Hsiang; Simmons, David

    For several decades, block copolymers have been employed as surfactants to reduce interfacial energy for applications from emulsification to surface adhesion. While the simplest approach employs symmetric diblocks, studies have examined asymmetric diblocks, multiblock copolymers, gradient copolymers, and copolymer-grafted nanoparticles. However, there exists no established approach to determining the optimal copolymer compatibilizer sequence for a given application. Here we employ molecular dynamics simulations within a genetic algorithm to identify copolymer surfactant sequences yielding maximum reductions in the interfacial energy of model immiscible polymers. The optimal copolymer sequence depends significantly on surfactant concentration. Most surprisingly, at high surface concentrations, where the surfactant achieves the greatest interfacial energy reduction, specific non-periodic sequences are found to significantly outperform any regularly blocky sequence. This emergence of polymer sequence-specificity within a non-sequenced environment adds to a recent body of work suggesting that specific sequence may have the potential to play a greater role in polymer properties than previously understood. We acknowledge the W. M. Keck Foundation for financial support of this research.
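
    A bare-bones sketch of the sequence search, assuming fixed-length A/B copolymer sequences and a stand-in fitness function; in the actual study each candidate sequence is scored by the interfacial energy measured from a molecular dynamics simulation, which is only stubbed out here.

        import random

        SEQ_LEN, POP_SIZE, GENERATIONS = 20, 24, 50

        def interfacial_energy(seq):
            # Stand-in fitness (hypothetical): penalize long runs of the same monomer.
            # In the real workflow this value would come from an MD simulation of the
            # copolymer at the interface between the two immiscible homopolymers.
            runs = sum(1 for a, b in zip(seq, seq[1:]) if a == b)
            return runs + random.random() * 0.01     # tiny noise to break ties

        def crossover(a, b):
            cut = random.randrange(1, SEQ_LEN)
            return a[:cut] + b[cut:]

        def mutate(seq, rate=0.05):
            return [("A" if m == "B" else "B") if random.random() < rate else m
                    for m in seq]

        population = [[random.choice("AB") for _ in range(SEQ_LEN)]
                      for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            ranked = sorted(population, key=interfacial_energy)   # lower is better
            elite = ranked[: POP_SIZE // 4]                       # keep the best quarter
            population = elite + [mutate(crossover(*random.sample(elite, 2)))
                                  for _ in range(POP_SIZE - len(elite))]
        best = min(population, key=interfacial_energy)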

  18. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  19. The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging.

    PubMed

    Matrone, Giulia; Savoia, Alessandro Stuart; Caliano, Giosue; Magenes, Giovanni

    2015-04-01

    Most ultrasound medical imaging systems currently on the market implement standard Delay and Sum (DAS) beamforming to form B-mode images. However, the image resolution and contrast achievable with DAS are limited by the aperture size and by the operating frequency. For this reason, different beamformers have been presented in the literature that are mainly based on adaptive algorithms, which achieve higher performance at the cost of increased computational complexity. In this paper, we propose the use of an alternative nonlinear beamforming algorithm for medical ultrasound imaging, called Delay Multiply and Sum (DMAS), originally conceived for a microwave radar system for breast cancer detection. We modify the DMAS beamformer and test its performance on both simulated and experimentally collected linear-scan data, by comparing the Point Spread Functions, beampatterns, synthetic phantom and in vivo carotid artery images obtained with standard DAS and with the proposed algorithm. Results show that the DMAS beamformer outperforms DAS in both simulated and experimental trials and that the main improvement brought about by this new method is a significantly higher contrast resolution (i.e., narrower main lobe and lower side lobes), which translates into an increased dynamic range and better quality of B-mode images. PMID:25420256
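
    A compact sketch of the DMAS combination step, assuming the per-channel signals have already been dynamically delayed for the current scan line. DAS simply sums the rows, whereas DMAS combines every pair of channels; the signed square root keeps the products dimensionally consistent, and any band-pass filtering commonly applied to the DMAS output afterwards is omitted.

        import numpy as np

        def das(delayed):
            # delayed: (n_elements, n_samples), channels already time-aligned.
            return delayed.sum(axis=0)

        def dmas(delayed):
            # sign(s_i*s_j)*sqrt(|s_i*s_j|) equals s_hat_i * s_hat_j, where
            # s_hat = sign(s)*sqrt(|s|), so the pairwise sum can be computed
            # per sample with the identity
            #   sum_{i<j} a_i a_j = ((sum_i a_i)^2 - sum_i a_i^2) / 2
            s_hat = np.sign(delayed) * np.sqrt(np.abs(delayed))
            total = s_hat.sum(axis=0)
            return 0.5 * (total ** 2 - (s_hat ** 2).sum(axis=0))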

  20. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. The proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) a mutation operation with a tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of both the ability to reach the optimal solution and computational time.
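
    A hedged sketch of the selection step only: roulette wheel selection with weights taken from a simple fuzzy membership of the normalized fitness. The actual membership functions and fuzzification used in the paper are not given in the abstract, so the membership function below is purely illustrative.

        import random

        def fuzzy_roulette_select(population, fitnesses):
            lo, hi = min(fitnesses), max(fitnesses)
            span = (hi - lo) or 1.0

            def membership(f):
                # Illustrative fuzzy degree of "good" for a fitness normalized to [0, 1];
                # the floor keeps weak individuals selectable (roulette wheel property).
                return max((f - lo) / span, 0.1)

            weights = [membership(f) for f in fitnesses]
            return random.choices(population, weights=weights, k=1)[0]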

  1. An efficient Earth Mover's Distance algorithm for robust histogram comparison.

    PubMed

    Ling, Haibin; Okada, Kazunori

    2007-05-01

    We propose EMD-L1: a fast and exact algorithm for computing the Earth Mover's Distance (EMD) between a pair of histograms. The efficiency of the new algorithm enables its application to problems that were previously prohibitive due to high time complexities. The proposed EMD-L1 significantly simplifies the original linear programming formulation of EMD. Exploiting the L1 metric structure, the number of unknown variables in EMD-L1 is reduced to O(N) from the O(N²) of the original EMD for a histogram with N bins. In addition, the number of constraints is reduced by half and the objective function of the linear program is simplified. Formally, without any approximation, we prove that the EMD-L1 formulation is equivalent to the original EMD with an L1 ground distance. To perform the EMD-L1 computation, we propose an efficient tree-based algorithm, Tree-EMD. Tree-EMD exploits the fact that a basic feasible solution of the simplex algorithm-based solver forms a spanning tree when we interpret EMD-L1 as a network flow optimization problem. We empirically show that this new algorithm has an average time complexity of O(N²), which significantly improves the best reported supercubic complexity of the original EMD. The accuracy of the proposed methods is evaluated by experiments for two computation-intensive problems: shape recognition and interest point matching using multidimensional histogram-based local features. For shape recognition, EMD-L1 is applied to compare shape contexts on the widely tested MPEG7 shape data set, as well as an articulated shape data set. For interest point matching, SIFT, shape context and spin image are tested on both synthetic and real image pairs with large geometrical deformation, illumination change, and heavy intensity noise. The results demonstrate that our EMD-L1-based solutions outperform previously reported state-of-the-art features and distance measures in solving the two tasks. PMID:17356203
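
    For intuition about the L1 ground distance, the one-dimensional special case is easy to state: for two histograms of equal total mass, the EMD with an L1 ground distance equals the L1 distance between their cumulative histograms. The sketch below covers only this 1D case; EMD-L1 itself addresses the multi-dimensional formulation.

        import numpy as np

        def emd_1d_l1(h1, h2):
            # Assumes equal total mass; EMD with |i - j| ground distance in 1D
            # reduces to the L1 distance between the cumulative sums.
            h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
            return np.abs(np.cumsum(h1 - h2)).sum()

        print(emd_1d_l1([0, 1, 0, 0], [0, 0, 0, 1]))   # one unit moved 2 bins -> 2.0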

  2. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared spectrometers from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally outperformed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.

  3. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  4. Feature Selection via Modified Gravitational Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2015-03-01

    Feature selection is the process of selecting a subset of relevant and most informative features that efficiently represents the input data. We propose a feature selection algorithm based on an n-dimensional gravitational optimization algorithm (NGOA), which is founded on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables called masses, which are defined from the extracted features. The forces between the masses, as well as their new locations, are calculated using the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods to FLAIR and T1-weighted MR brain images. There are two classes: normal and abnormal tissues. The extracted features are divided into groups of five. The best feature in each group is selected using the N-dimensional gravitational optimization algorithm and a support vector machine classifier. The selected features then form new groups of five, and the process repeats until the desired number of features is selected. An advantage of the NGOA algorithm is that the possibility of being drawn into a locally optimal solution is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real and simulated brain tumor data.

  5. Predicting pupylation sites in prokaryotic proteins using semi-supervised self-training support vector machine algorithm.

    PubMed

    Ju, Zhe; Gu, Hong

    2016-08-15

    As one important post-translational modification of prokaryotic proteins, pupylation plays a key role in regulating various biological processes. The accurate identification of pupylation sites is crucial for understanding the underlying mechanisms of pupylation. Although several computational methods have been developed for the identification of pupylation sites, their prediction accuracy is still unsatisfactory. Here, a novel bioinformatics tool named IMP-PUP is proposed to improve the prediction of pupylation sites. IMP-PUP is constructed on the composition of k-spaced amino acid pairs and trained with a modified semi-supervised self-training support vector machine (SVM) algorithm. The proposed algorithm iteratively trains a series of support vector machine classifiers on both annotated and non-annotated pupylated proteins. Computational results show that IMP-PUP achieves areas under the receiver operating characteristic curve of 0.91, 0.73, and 0.75 on our training set, Tung's testing set, and our testing set, respectively, which are better than those of the different-error-costs SVM algorithm and the original self-training SVM algorithm. Independent tests also show that IMP-PUP significantly outperforms three other existing pupylation site predictors: GPS-PUP, iPUP, and pbPUP. Therefore, IMP-PUP can be a useful tool for accurate prediction of pupylation sites. A MATLAB software package for IMP-PUP is available at https://juzhe1120.github.io/. PMID:27197054
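
    A generic self-training SVM loop (a sketch only, using scikit-learn): the classifier trained on the labelled pool pseudo-labels high-confidence unlabelled samples and is retrained on the enlarged pool. The paper's feature encoding (composition of k-spaced amino acid pairs) and its specific modifications to self-training are not reproduced here; the inputs are assumed to be NumPy arrays.

        import numpy as np
        from sklearn.svm import SVC

        def self_training_svm(X_lab, y_lab, X_unlab, rounds=5, conf=0.9):
            X, y = X_lab.copy(), y_lab.copy()
            for _ in range(rounds):
                if len(X_unlab) == 0:
                    break
                clf = SVC(probability=True).fit(X, y)
                proba = clf.predict_proba(X_unlab)
                confident = proba.max(axis=1) >= conf
                if not confident.any():
                    break
                # Add high-confidence pseudo-labelled samples to the training pool.
                X = np.vstack([X, X_unlab[confident]])
                y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
                X_unlab = X_unlab[~confident]
            return SVC(probability=True).fit(X, y)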

  6. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm

    PubMed Central

    Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; Zhang, Yuanyuan; Liu, Zhaowen

    2016-01-01

    Motivation The two-locus model is a typical significant disease model to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and a preference for some types of disease models. Method In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize a pair of SNP loci as a candidate model; the two criteria are adopted simultaneously to improve identification power and to address the preference for particular disease models. The harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, and a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have strong marginal effects. Finally, the G-test statistic is used to further test the candidate models. Results We evaluate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have recently been developed based on swarm-intelligence search algorithms. The simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset. PMID:27014873

  7. Prediction Errors in Learning Drug Response from Gene Expression Data – Influence of Labeling, Sample Size, and Machine Learning Algorithm

    PubMed Central

    Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian

    2013-01-01

    Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e., response could be predicted for some compounds but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment. PMID:23894636

  8. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms compared with other search algorithms in the area of optimization, such as the genetic algorithm (GA) and particle swarm optimization (PSO). However, the local search process of ABC and its bee movement (solution improvement) equation still have some weaknesses. ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm outperforms the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
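
    A sketch contrasting the standard ABC neighbour-search equation with a particle-movement-inspired variant that also pulls the candidate toward the global best, which is the general idea behind HPABC; the paper's exact update equation is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)

        def abc_update(x_i, x_k):
            # Standard ABC neighbour search: perturb one randomly chosen dimension
            # toward/away from a randomly chosen peer solution x_k.
            j = rng.integers(x_i.size)
            v = x_i.copy()
            v[j] = x_i[j] + rng.uniform(-1, 1) * (x_i[j] - x_k[j])
            return v

        def hpabc_update(x_i, x_k, g_best, c=1.5):
            # Particle-movement-inspired variant (illustrative only): also pull the
            # perturbed dimension toward the global best, as PSO does, to strengthen
            # exploitation around promising regions.
            j = rng.integers(x_i.size)
            v = x_i.copy()
            v[j] = (x_i[j] + rng.uniform(-1, 1) * (x_i[j] - x_k[j])
                    + c * rng.random() * (g_best[j] - x_i[j]))
            return v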

  9. Kernel simplex growing algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao

    2014-01-01

    In order to effectively extract endmembers from hyperspectral imagery in which the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to a kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the multiple scattering effects can be ignored. To avoid explicitly determining the complex nonlinear mapping, a kernel function is used to extend the NSGA to a kernel NSGA (KNSGA). Experimental results on simulated and real data show that the proposed KNSGA approach outperforms both SGA and NSGA.

  10. Brief Cognitive-Behavioral Depression Prevention Program for High-Risk Adolescents Outperforms Two Alternative Interventions: A Randomized Efficacy Trial

    ERIC Educational Resources Information Center

    Stice, Eric; Rohde, Paul; Seeley, John R.; Gau, Jeff M.

    2008-01-01

    In this depression prevention trial, 341 high-risk adolescents (mean age = 15.6 years, SD = 1.2) with elevated depressive symptoms were randomized to a brief group cognitive-behavioral (CB) intervention, group supportive-expressive intervention, bibliotherapy, or assessment-only control condition. CB participants showed significantly greater…

  11. I.Q.: New Research Shows That the Japanese Outperform All Others in Intelligence Tests. Are They Really Smarter?

    ERIC Educational Resources Information Center

    Mohs, Mayo

    1982-01-01

    New research shows that the Japanese achieved significantly higher average IQ scores than did their American counterparts. These results provide the focus of a discussion on the nature/nurture controversy, the validity of using IQ scores to compare the mental capacity of races and nationality groups, and other factors related to intelligence testing. (JN)

  12. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  13. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    K-means clustering-based recommendation algorithms have been proposed with the claim of increasing the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-means in terms of recommendation and training time while maintaining a good level of accuracy.
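
    A generic single-pass (leader-style) clustering sketch illustrating the incremental property such algorithms rely on: each arriving vector either joins the nearest existing cluster, updating its centroid in place, or starts a new cluster. The distance threshold and any recommender-specific details are assumptions, not the authors' formulation.

        import numpy as np

        def one_pass_cluster(stream, threshold):
            centroids, counts, labels = [], [], []
            for x in stream:
                if centroids:
                    d = [np.linalg.norm(x - c) for c in centroids]
                    k = int(np.argmin(d))
                if not centroids or d[k] > threshold:
                    # Start a new cluster with this vector as its centroid.
                    centroids.append(np.asarray(x, float))
                    counts.append(1)
                    labels.append(len(centroids) - 1)
                else:
                    # Incrementally update the nearest centroid (running mean).
                    counts[k] += 1
                    centroids[k] += (x - centroids[k]) / counts[k]
                    labels.append(k)
            return centroids, labels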

  14. Lévy flight artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She

    2016-08-01

    The artificial bee colony (ABC) optimisation algorithm is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity which helps exploration at the cost of exploitation of the search space. In ABC, there is a high chance of skipping the true solution because of its large step sizes. In order to balance diversity and convergence in ABC, a Lévy flight inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), has both local and global search capability simultaneously, achieved by tuning the Lévy flight parameters and thus automatically tuning the step sizes. In the LFABC, new solutions are generated around the best solution, which helps to enhance the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. Experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent variants of ABC, namely, Gbest-guided ABC, best-so-far ABC and modified ABC, in most of the experiments.
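
    Heavy-tailed Lévy-flight step lengths are commonly drawn with Mantegna's algorithm, sketched below; the LFABC move itself (stepping new solutions around the best solution) is indicated only as a comment, since the paper's exact equation is not reproduced here.

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(dim, beta=1.5, rng=None):
            # Mantegna's algorithm: a standard construction for Levy-stable steps.
            rng = rng or np.random.default_rng()
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0, sigma, dim)
            v = rng.normal(0, 1, dim)
            return u / np.abs(v) ** (1 / beta)

        # Illustrative LFABC-style move (assumed form): step around the best solution,
        #   new = best + levy_step(dim) * (best - x_i)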

  15. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter by an average of 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661

  16. Diamond-like carbon coatings enhance scratch resistance of bearing surfaces for use in joint arthroplasty: hard substrates outperform soft.

    PubMed

    Roy, Marcel E; Whiteside, Leo A; Katerberg, Brian J

    2009-05-01

    The purpose of this study was to test the hypotheses that diamond-like carbon (DLC) coatings will enhance the scratch resistance of a bearing surface in joint arthroplasty, and that a hard ceramic substrate will further enhance scratch resistance by reducing plastic deformation. We tested these hypotheses by applying a hard DLC coating to medical-grade cobalt chromium alloy (CoCr) and magnesia-stabilized zirconia (Mg-PSZ) femoral heads and performing scratch tests to determine the loads required to cause cohesive and adhesive fracture of the coating. Scratch tracks of DLC-coated and noncoated heads were then scanned by optical profilometry to determine scratch depth, width, and pile-up (raised edges), as measures of susceptibility to scratching. DLC-coated CoCr specimens exhibited cohesive coating fracture as wedge spallation at an average load of 9.74 N, whereas DLC-coated Mg-PSZ exhibited cohesive fracture as arc-tensile cracks and chipping at a significantly higher average load of 41.3 N (p < 0.0001). At adhesive coating fracture, DLC-CoCr delaminated at an average load of 35.2 N, whereas DLC-Mg-PSZ fractured by recovery spallation at a significantly higher average load of 46.8 N (p < 0.05). Both DLC-CoCr and DLC-Mg-PSZ specimens exhibited significantly shallower scratches and less pile-up than did uncoated specimens (p < 0.005 and p < 0.01, respectively). However, the harder ceramic substrate of DLC-Mg-PSZ better resisted plastic deformation, requiring significantly higher loads for cohesive and adhesive coating fracture. These findings supported both of our hypotheses. PMID:18985791

  17. Magnetic resonance imaging - ultrasound fusion targeted biopsy outperforms standard approaches in detecting prostate cancer: A meta-analysis

    PubMed Central

    Jiang, Xuping; Zhang, Jiayi; Tang, Jingyuan; Xu, Zhen; Zhang, Wei; Zhang, Qing; Guo, Hongqian; Zhou, Weimin

    2016-01-01

    The aim of the present study was to determine whether magnetic resonance imaging - ultrasound (MRI-US) fusion prostate biopsy is superior to systematic biopsy for making a definitive diagnosis of prostate cancer. The two strategies were also compared regarding their ability to detect clinically significant and insignificant prostate cancer. A literature search was conducted through the PubMed, EMBASE and China National Knowledge Infrastructure databases using appropriate search terms. A total of 3,415 cases from 21 studies were included in the present meta-analysis. Data were expressed as relative risk (RR) and 95% confidence interval. The results revealed that MRI-US fusion biopsy achieved a higher rate of overall prostate cancer detection compared with systematic biopsy (RR=1.09; P=0.047). Moreover, MRI-US fusion biopsy detected more clinically significant cancers compared with systematic biopsy (RR=1.22; P<0.01). It is therefore recommended that multi-parametric MRI-US is performed in men suspected of having prostate cancer to optimize the detection of clinically significant disease, while reducing the burden of biopsies. PMID:27446568

  18. Human ESC-Derived MSCs Outperform Bone Marrow MSCs in the Treatment of an EAE Model of Multiple Sclerosis

    PubMed Central

    Wang, Xiaofang; Kimbrel, Erin A.; Ijichi, Kumiko; Paul, Debayon; Lazorchak, Adam S.; Chu, Jianlin; Kouris, Nicholas A.; Yavanian, Gregory J.; Lu, Shi-Jiang; Pachter, Joel S.; Crocker, Stephen J.; Lanza, Robert; Xu, Ren-He

    2014-01-01

    Summary Current therapies for multiple sclerosis (MS) are largely palliative, not curative. Mesenchymal stem cells (MSCs) harbor regenerative and immunosuppressive functions, indicating a potential therapy for MS, yet the variability and low potency of MSCs from adult sources hinder their therapeutic potential. MSCs derived from human embryonic stem cells (hES-MSCs) may be better suited for clinical treatment of MS because of their unlimited and stable supply. Here, we show that hES-MSCs significantly reduce clinical symptoms and prevent neuronal demyelination in a mouse experimental autoimmune encephalitis (EAE) model of MS, and that the EAE disease-modifying effect of hES-MSCs is significantly greater than that of human bone-marrow-derived MSCs (BM-MSCs). Our evidence also suggests that increased IL-6 expression by BM-MSCs contributes to the reduced anti-EAE therapeutic activity of these cells. A distinct ability to extravasate and migrate into inflamed CNS tissues may also be associated with the robust therapeutic effects of hES-MSCs on EAE. PMID:25068126

  19. Efficient Record Linkage Algorithms Using Complete Linkage Clustering

    PubMed Central

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete-linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604
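
    A sketch of the overall pipeline (not the authors' parallel implementation): blocking to bound the number of comparisons, then complete-linkage hierarchical clustering within each block, cut at a distance threshold so that every pair of records inside a resulting cluster is within that threshold. The vectorize and block_key functions are hypothetical user-supplied helpers.

        import numpy as np
        from collections import defaultdict
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        def link_records(records, vectorize, block_key, threshold):
            # 1) Blocking: only records sharing a blocking key are compared.
            blocks = defaultdict(list)
            for idx, rec in enumerate(records):
                blocks[block_key(rec)].append(idx)

            clusters, next_id = {}, 0
            for members in blocks.values():
                if len(members) == 1:
                    clusters[members[0]] = next_id
                    next_id += 1
                    continue
                # 2) Complete-linkage clustering within the block.
                X = np.array([vectorize(records[i]) for i in members])
                Z = linkage(pdist(X), method="complete")
                # 3) Cut so every intra-cluster pair is within the threshold.
                for i, c in zip(members, fcluster(Z, t=threshold, criterion="distance")):
                    clusters[i] = next_id + c
                next_id += len(members) + 1
            return clusters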

  20. Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel

    PubMed Central

    Akbari, Mohsen; Manesh, Mohsen Riahi

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely particle swarm optimization (PSO) and the genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way as to maximize the SNR and minimize the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform the conventional diversity combining methods. PMID:25045725

  1. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as its distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz., CASIA-IrisV4 and UBIRIS.v2). We compared our proposed method with existing iris segmentation methods. Our proposed method has the lowest time complexity, O(n(i+p)). The experimental results show that the proposed algorithm outperforms the existing iris segmentation methods.

  2. Alternating minimization algorithm for speckle reduction with a shifting technique.

    PubMed

    Woo, Hyenkyun; Yun, Sangwoon

    2012-04-01

    Speckles (multiplicative noise) in synthetic aperture radar (SAR) make it difficult to interpret the observed image. Due to the edge-preserving feature of total variation (TV), variational models with TV regularization have attracted much interest in reducing speckles. Algorithms based on the augmented Lagrangian function have been proposed to efficiently solve speckle-reduction variational models with TV regularization. However, these algorithms require inner iterations or inverses involving the Laplacian operator at each iteration. In this paper, we adapt Tseng's alternating minimization algorithm with a shifting technique to efficiently remove the speckle without any inner iterations or inverses involving the Laplacian operator. The proposed method is very simple and highly parallelizable; therefore, it is very efficient to despeckle huge-size SAR images. Numerical results show that our proposed method outperforms the state-of-the-art algorithms for speckle-reduction variational models with a TV regularizer in terms of central-processing-unit time. PMID:22106149

  3. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to a linear array to obtain optimized antenna positions that achieve an array pattern with minimum side lobe level along with deep nulls placed in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking against results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance. PMID:27066339

  4. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature-corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but with brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate the surface ice temperature, which in turn is used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  5. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    PubMed

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm(-1) were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from

  6. Cuckoo search epistasis: a new method for exploring significant genetic interactions.

    PubMed

    Aflakparast, M; Salimi, H; Gerami, A; Dubé, M-P; Visweswaran, S; Masoudi-Nejad, A

    2014-06-01

    The advent of high-throughput sequencing technology has resulted in the ability to measure millions of single-nucleotide polymorphisms (SNPs) from thousands of individuals. Although these high-dimensional data have paved the way for better understanding of the genetic architecture of common diseases, they have also given rise to challenges in developing computational methods for learning epistatic relationships among genetic markers. We propose a new method, named cuckoo search epistasis (CSE) for identifying significant epistatic interactions in population-based association studies with a case-control design. This method combines a computationally efficient Bayesian scoring function with an evolutionary-based heuristic search algorithm, and can be efficiently applied to high-dimensional genome-wide SNP data. The experimental results from synthetic data sets show that CSE outperforms existing methods including multifactorial dimensionality reduction and Bayesian epistasis association mapping. In addition, on a real genome-wide data set related to Alzheimer's disease, CSE identified SNPs that are consistent with previously reported results, and show the utility of CSE for application to genome-wide data. PMID:24549111

  7. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
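
    A typical way to use FLANN from Python is through its OpenCV integration; the sketch below matches SIFT descriptors with a randomized k-d forest index and a Lowe-style ratio test. The image file names are placeholders and the parameter values are common defaults rather than anything prescribed by the paper.

        import cv2

        img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
        img2 = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        index_params = dict(algorithm=1, trees=5)   # FLANN_INDEX_KDTREE: randomized k-d forest
        search_params = dict(checks=50)             # bounded leaf checks: speed/accuracy trade-off
        flann = cv2.FlannBasedMatcher(index_params, search_params)
        matches = flann.knnMatch(des1, des2, k=2)

        good = []
        for pair in matches:
            # Lowe's ratio test to keep only distinctive matches.
            if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
                good.append(pair[0])
        print(len(good), "good matches")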

  8. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  9. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it has problems of slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of the elite solution pool and the block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated by a set of well-known numerical benchmark functions. It is then applied to optimize two ship hull forms for minimum resistance. The test results show that the proposed new improved ABC algorithm can outperform the ABC algorithm in most of the tested problems.

  10. Online estimation of lower and upper bounds for heart sound boundaries in chest sound using Convex-hull algorithm.

    PubMed

    Çağlar, F; Ozbek, I Y

    2012-01-01

    Heart sound localization in chest sound is an essential part of many heart sound cancellation algorithms. The main difficulty for heart sound localization methods is the precise determination of the onset and offset boundaries of the heart sound segment. This paper presents a novel method to estimate lower and upper bounds for the onset and offset of the heart sound segment, which can be used as anchor points for more precise estimation. For this purpose, the chest sound is first divided into frames; then entropy and smoothed entropy features of these frames are extracted and used in the convex-hull algorithm to estimate the upper and lower bounds for the heart sound boundaries. The convex-hull algorithm constructs a special type of envelope function for the entropy features, and if the maximal difference between the envelope function and the entropy is larger than a certain threshold, that point is considered a heart sound bound. The results of the proposed method are compared with a baseline method, which is a modified version of a well-known heart sound localization method. The results show that the proposed method outperforms the baseline method in terms of accuracy and detection error rate. The experimental results also show that smoothing the entropy features significantly improves the performance of both the baseline and the proposed methods. PMID:23366867
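
    A sketch of the envelope construction described above: the upper convex hull of the frame-wise entropy values, interpolated back onto every frame, with a bound flagged where the envelope departs from the entropy by more than a threshold. The entropy features themselves and the threshold choice are assumptions left unspecified here.

        import numpy as np

        def upper_envelope(y):
            # Upper convex-hull of the points (i, y[i]) via a monotone-chain scan,
            # then linear interpolation between hull vertices.
            x = np.arange(len(y))
            hull = []
            for i in range(len(y)):
                while len(hull) >= 2:
                    (x1, y1), (x2, y2) = hull[-2], hull[-1]
                    # Pop the last vertex if it lies on or below the chord to point i.
                    if (y2 - y1) * (x[i] - x1) <= (y[i] - y1) * (x2 - x1):
                        hull.pop()
                    else:
                        break
                hull.append((x[i], y[i]))
            hx, hy = zip(*hull)
            return np.interp(x, hx, hy)

        def candidate_bound(entropy, threshold):
            # Flag the frame where the envelope departs most from the entropy,
            # provided the gap exceeds the threshold (illustrative criterion).
            gap = upper_envelope(entropy) - entropy
            i = int(np.argmax(gap))
            return i if gap[i] > threshold else None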

  11. Solving large-scale real-world telecommunication problems using a grid-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Luna, Francisco; Nebro, Antonio; Alba, Enrique; Durillo, Juan

    2008-11-01

    This article analyses the use of a grid-based genetic algorithm (GrEA) to solve a real-world instance of a problem from the telecommunication domain. The problem, known as automatic frequency planning (AFP), is used in a global system for mobile communications (GSM) networks to assign a number of fixed frequencies to a set of GSM transceivers located in the antennae of a cellular phone network. Real data instances of the AFP are very difficult to solve owing to the NP-hard nature of the problem, so combining grid computing and metaheuristics turns out to be a way to provide satisfactory solutions in a reasonable amount of time. GrEA has been deployed on a grid with up to 300 processors to solve an AFP instance of 2612 transceivers. The results not only show that significant running time reductions are achieved, but that the search capability of GrEA clearly outperforms that of the equivalent non-grid algorithm.

  12. Tumor stratification by a novel graph-regularized bi-clique finding algorithm.

    PubMed

    Ahmadi Adl, Amin; Qian, Xiaoning

    2015-08-01

    Due to the disease mechanisms involved, many complex diseases, such as cancer, demonstrate significant heterogeneity with varying behaviors, including different survival times, treatment responses, and recurrence rates. The aim of tumor stratification is to identify disease subtypes, which is an important first step towards precision medicine. Recent advances in profiling large numbers of molecular variables, such as in The Cancer Genome Atlas (TCGA), have enabled researchers to implement computational methods, including traditional clustering and bi-clustering algorithms, to systematically analyze high-throughput molecular measurements to identify tumor subtypes as well as their corresponding associated biomarkers. In this study we discuss critical issues and challenges in existing computational approaches for tumor stratification. We show that the problem can be formulated as finding densely connected sub-graphs (bi-cliques) in a bipartite graph representation of genomic data. We propose a novel algorithm that takes advantage of prior biological knowledge through a gene-gene interaction network to find such sub-graphs, which helps simultaneously identify both tumor subtypes and their corresponding genetic markers. Our experimental results show that our proposed method outperforms current state-of-the-art methods for tumor stratification. PMID:25791318

  13. A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.

    2005-01-01

    We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.

  14. A multi-scale non-local means algorithm for image de-noising

    NASA Astrophysics Data System (ADS)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in the field of electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality, and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise from images. In practice, it is a difficult task to effectively remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed that attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploited the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain and therefore does not leverage the fact that multi-scale transforms provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
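
    A minimal single-scale NLM sketch follows: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar, with weights exp(-||patch_i - patch_j||^2 / h^2). The patch size, search window, and filtering parameter h are illustrative; the MS-NLM of the paper applies this idea per transform scale, which is not shown.

```python
import numpy as np

# Minimal single-scale non-local means sketch (not the MS-NLM of the paper).
def nlm_denoise(img, patch=3, search=7, h=0.1):
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]     # patch centered at (i, j)
            weights, values = [], []
            for di in range(-(search // 2), search // 2 + 1):
                for dj in range(-(search // 2), search // 2 + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        cand = padded[ii:ii + patch, jj:jj + patch]
                        d2 = np.mean((ref - cand) ** 2)
                        weights.append(np.exp(-d2 / h ** 2))
                        values.append(img[ii, jj])
            out[i, j] = np.average(values, weights=weights)
    return out

noisy = np.clip(np.ones((32, 32)) * 0.5 + 0.1 * np.random.randn(32, 32), 0, 1)
print("residual std after NLM:", np.std(nlm_denoise(noisy) - 0.5))
```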

  15. Mapping Soil Properties of Africa at 250 m Resolution: Random Forests Significantly Improve Current Predictions

    PubMed Central

    Hengl, Tomislav; Heuvelink, Gerard B. M.; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Shepherd, Keith D.; Sila, Andrew; MacMillan, Robert A.; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E.

    2015-01-01

    80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008–2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to the agricultural management—organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15–75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological

  16. Mapping Soil Properties of Africa at 250 m Resolution: Random Forests Significantly Improve Current Predictions.

    PubMed

    Hengl, Tomislav; Heuvelink, Gerard B M; Kempen, Bas; Leenaars, Johan G B; Walsh, Markus G; Shepherd, Keith D; Sila, Andrew; MacMillan, Robert A; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E

    2015-01-01

    80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008-2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to the agricultural management--organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15-75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological
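
    A minimal sketch of the kind of 5-fold cross-validated comparison described above is given below, using scikit-learn on synthetic data. The AfSIS covariates and soil property targets are not reproduced; on this synthetic (linear) data the point is only the comparison procedure, not which model wins.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# 5-fold CV comparison of random forests vs. linear regression on synthetic
# data standing in for the soil covariates and properties.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.2f}")
```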

  17. The Research of Solution to the Problems of Complex Task Scheduling Based on Self-adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen

    Traditional genetic algorithms (GAs) display a tendency toward premature convergence when dealing with scheduling problems. To adapt the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA targeting multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its ability to evolve solutions for complex task scheduling optimization.
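
    One common way to make operator rates self-adaptive is to derive them from fitness statistics, as sketched below: individuals near the population best receive lower crossover/mutation probabilities so they are preserved, while poorer individuals receive higher rates. This rule and its constants are assumptions for illustration; the abstract does not state the paper's exact adaptation formula or scheduling encoding.

```python
# Fitness-driven self-adaptive operator rates (illustrative rule, not the
# paper's exact formula). Higher fitness = better individual.

def adaptive_rates(f, f_best, f_avg,
                   pc_hi=0.9, pc_lo=0.5, pm_hi=0.10, pm_lo=0.01):
    """Return (crossover prob, mutation prob) for an individual of fitness f."""
    if f <= f_avg or f_best == f_avg:        # below-average: explore strongly
        return pc_hi, pm_hi
    scale = (f_best - f) / (f_best - f_avg)  # 0 at the best, 1 at the average
    pc = pc_lo + (pc_hi - pc_lo) * scale
    pm = pm_lo + (pm_hi - pm_lo) * scale
    return pc, pm

print(adaptive_rates(f=0.95, f_best=1.0, f_avg=0.6))   # good individual
print(adaptive_rates(f=0.40, f_best=1.0, f_avg=0.6))   # poor individual
```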

  18. Constraint satisfaction using a hybrid evolutionary hill-climbing algorithm that performs opportunistic arc and path revision

    SciTech Connect

    Bowen, J.; Dozier, G.

    1996-12-31

    This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to realize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.

  19. Statistically significant relational data mining :

    SciTech Connect

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  20. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms which are inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO), together with two extensions for improving its performance, is presented. The BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this study.

  1. Three hypothesis algorithm with occlusion reasoning for multiple people tracking

    NASA Astrophysics Data System (ADS)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Medina-Carnicer, Rafael

    2015-01-01

    This work proposes a detection-based tracking algorithm able to locate and keep the identity of multiple people, who may be occluded, in uncontrolled stationary environments. Our algorithm builds a tracking graph that models spatio-temporal relationships among attributes of interacting people to predict and resolve partial and total occlusions. When a total occlusion occurs, the algorithm generates various hypotheses about the location of the occluded person considering three cases: (a) the person keeps the same direction and speed, (b) the person follows the direction and speed of the occluder, and (c) the person remains motionless during occlusion. By analyzing the graph, our algorithm can detect trajectories produced by false alarms and estimate the location of missing or occluded people. Our algorithm performs acceptably under complex conditions, such as partial visibility of individuals getting inside or outside the scene, continuous interactions and occlusions among people, wrong or missing information on the detection of persons, as well as variation of the person's appearance due to illumination changes and background-clutter distracters. Our algorithm was evaluated on test sequences in the field of intelligent surveillance achieving an overall precision of 93%. Results show that our tracking algorithm outperforms even trajectory-based state-of-the-art algorithms.

  2. An Improved Physarum polycephalum Algorithm for the Shortest Path Problem

    PubMed Central

    Wang, Qing; Adamatzky, Andrew; Chan, Felix T. S.; Mahadevan, Sankaran

    2014-01-01

    The shortest path problem is among the classical problems of computer science. It has been solved by hundreds of algorithms, silicon computing architectures, and novel, unconventional computing devices. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms have been designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and Dijkstra's algorithm. PMID:24982960

  3. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or it does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the predicted peptide conformations with those of a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293

  4. A Hybrid Evolutionary Algorithm for Wheat Blending Problem

    PubMed Central

    Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi

    2014-01-01

    This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local-search-based post-tuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222

  5. An improved Physarum polycephalum algorithm for the shortest path problem.

    PubMed

    Zhang, Xiaoge; Wang, Qing; Adamatzky, Andrew; Chan, Felix T S; Mahadevan, Sankaran; Deng, Yong

    2014-01-01

    The shortest path problem is among the classical problems of computer science. It has been solved by hundreds of algorithms, silicon computing architectures, and novel, unconventional computing devices. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms have been designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and Dijkstra's algorithm. PMID:24982960

  6. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    PubMed Central

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  7. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  8. Enhanced ATR algorithm for high resolution multi-band sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2008-04-01

    An improved automatic target recognition (ATR) processing string has been developed. The overall processing string consists of pre-processing, subimage adaptive clutter filtering (SACF), normalization, detection, data regularization, feature extraction, optimal subset feature selection, feature orthogonalization and classification processing blocks. A new improvement was made to the processing string, data regularization, which entails computing the input data mean, clipping the data to a multiple of its mean and scaling it, prior to feature extraction. The classified objects of 3 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution three-frequency band sonar imagery. The ATR processing strings were individually tuned to the corresponding three-frequency band data, making use of the new processing improvement, data regularization, which resulted in a 3:1 reduction in false alarms. Two significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a repeated application of a subset Volterra feature selection / feature orthogonalization / LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the ATR processing strings outperforms baseline summing and single-stage Volterra feature LLRT algorithms, yielding significant improvements over the best single ATR processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate.
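
    A small sketch of the data-regularization step as described above follows: compute the input data mean, clip the data to a multiple of that mean, and rescale before feature extraction. The clip multiple and the target range are assumptions, since the abstract does not give the exact values used.

```python
import numpy as np

# Data-regularization sketch: mean, clip to a multiple of the mean, rescale.
# The clip multiple k and the [0, 1] target range are illustrative assumptions.

def regularize(img, k=4.0):
    m = img.mean()
    clipped = np.clip(img, 0.0, k * m)     # suppress extreme bright returns
    return clipped / (k * m)               # scale into [0, 1]

sonar_chip = np.abs(np.random.randn(64, 64)) ** 2
print("max before/after:", sonar_chip.max(), regularize(sonar_chip).max())
```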

  9. CORDIC algorithms in four dimensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc; Hsiao, Shen-Fu

    1990-11-01

    CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm: (x^2 + y^2)^(1/2) or (x^2 - y^2)^(1/2). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations quickly becomes small as the dimension of the space increases. Thus, in spaces of dimension 5 or more there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.
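
    For reference, a minimal two-dimensional circular CORDIC rotation is sketched below: shift-and-add style iterations driven by a table of arctan(2^-i), with the accumulated gain divided out at the end. The four-dimensional extensions discussed in the paper build on this basic iteration and are not reproduced.

```python
import math

# Minimal 2-D circular CORDIC rotation (rotation mode). K is the product of
# cos(arctan(2^-i)) terms and compensates the per-iteration gain.

def cordic_rotate(x, y, angle, n_iter=32):
    atans = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
        d = 1.0 if angle >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * atans[i]
    return x * K, y * K

print(cordic_rotate(1.0, 0.0, math.pi / 4))   # approximately (0.7071, 0.7071)
```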

  10. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  11. Service Discovery Framework Supported by EM Algorithm and Bayesian Classifier

    NASA Astrophysics Data System (ADS)

    Peng, Yanbin

    Service-oriented computing has become a mainstream research field. Meanwhile, machine learning is a promising AI technology which can enhance the performance of traditional algorithms. Therefore, aiming to solve the service discovery problem, this paper incorporates a Bayesian classifier into a web service discovery framework, which can improve service querying speed. In this framework, the services in the service library become the training set of the Bayesian classifier, and a service query becomes a testing sample. The service matchmaking process can then be executed within the relevant service class, which contains fewer services and thus saves time. Because the class of each service in the training set is unknown, the EM algorithm is used to estimate the prior probabilities and likelihood functions. Experimental results show that the EM algorithm and Bayesian classifier supported method outperforms other methods in time complexity.
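
    A small sketch of the overall querying idea follows: fit a mixture model with EM to service feature vectors (standing in for the framework's Bayesian classifier whose class memberships are estimated by EM), then match a query only against the services in its most probable class. Feature extraction from service descriptions is assumed to have happened already; the number of classes and the 2-D features are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# EM-fitted mixture over service feature vectors; matchmaking is restricted
# to the query's most probable class, shrinking the search set.

rng = np.random.default_rng(0)
services = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in (0.0, 2.0, 4.0)])

gmm = GaussianMixture(n_components=3, random_state=0).fit(services)
labels = gmm.predict(services)

query = np.array([[2.1, 1.9]])
query_class = gmm.predict(query)[0]
candidates = services[labels == query_class]        # smaller search set
# simple matchmaking stand-in: nearest candidate by Euclidean distance
best = candidates[np.argmin(np.linalg.norm(candidates - query, axis=1))]
print(f"searched {len(candidates)} of {len(services)} services; best match {best}")
```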

  12. New validation algorithm for data association in SLAM.

    PubMed

    Guerra, Edmundo; Munguia, Rodrigo; Bolea, Yolanda; Grau, Antoni

    2013-09-01

    In this work, a novel data validation algorithm for a single-camera SLAM system is introduced. A 6-degree-of-freedom monocular SLAM method based on the delayed inverse-depth (DI-D) feature initialization is used as a benchmark. This SLAM methodology has been improved with the introduction of the proposed data association batch validation technique, the highest order hypothesis compatibility test, HOHCT. This new algorithm is based on the evaluation of statistically compatible hypotheses, and a search algorithm designed to exploit the characteristics of delayed inverse-depth technique. In order to show the capabilities of the proposed technique, experimental tests have been compared with classical methods. The results of the proposed technique outperformed the results of the classical approaches. PMID:23701896

  13. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
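
    The sketch below shows how HMCR, PAR, and bw act in one basic harmony search improvisation step; the learning-automata mechanism that LAHS uses to tune these parameters online is not reproduced, and fixed values are used purely for illustration.

```python
import random

# Basic harmony search sketch; LAHS replaces the fixed HMCR/PAR/bw values
# below with learning-automata-driven choices.

def sphere(x):
    return sum(v * v for v in x)

dim, hms, hmcr, par, bw = 5, 20, 0.9, 0.3, 0.05
lo, hi = -5.0, 5.0
memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]

for _ in range(5000):
    new = []
    for d in range(dim):
        if random.random() < hmcr:                 # memory consideration
            v = random.choice(memory)[d]
            if random.random() < par:              # pitch adjustment
                v += random.uniform(-bw, bw)
        else:                                      # random selection
            v = random.uniform(lo, hi)
        new.append(min(hi, max(lo, v)))
    worst = max(range(hms), key=lambda i: sphere(memory[i]))
    if sphere(new) < sphere(memory[worst]):        # replace the worst harmony
        memory[worst] = new

print("best value found:", min(sphere(h) for h in memory))
```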

  14. The index-based subgraph matching algorithm (ISMA): fast subgraph enumeration in large networks using optimized search trees.

    PubMed

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  15. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    PubMed Central

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  16. Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei

    2007-12-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, the correction is often only partial, and a deconvolution is required to reach the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the Standard R-L Algorithm (SRLA) often suffers from the speckling phenomenon, wraparound artifacts, and noise problems. A Modified R-L Algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct sampling approach and incorporates noise statistics into the Standard R-L Algorithm. An alternating iterative method is applied to estimate the PSF and the object in the novel algorithm. Comparative experiments on indoor data and AO images are performed with the SRLA and the MRLA in this paper. Experimental results show that the novel MRLA outperforms the SRLA.
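
    For context, the standard Richardson-Lucy update on which the MRLA builds is sketched below; Magain's correct-sampling step, the noise statistics, and the alternating PSF/object estimation of the paper are not reproduced. The toy PSF and iteration count are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

# Standard Richardson-Lucy iteration (the SRLA baseline only).
def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# toy example: blur a point source with a Gaussian PSF, then deconvolve
x = np.zeros((33, 33)); x[16, 16] = 1.0
g = np.exp(-((np.arange(33) - 16) ** 2) / 8.0)
psf = np.outer(g, g); psf /= psf.sum()
observed = fftconvolve(x, psf, mode="same")
restored = richardson_lucy(observed, psf)
print("peak sharpens from", observed.max().round(3), "to", restored.max().round(3))
```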

  17. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  18. Applications of genetic algorithms and neural networks to interatomic potentials

    NASA Astrophysics Data System (ADS)

    Hobday, Steven; Smith, Roger; BelBruno, Joe

    1999-06-01

    Applications of two modern artificial intelligence (AI) techniques, genetic algorithms (GAs) and neural networks (NNs), to computer simulations are reported. It is shown that GAs are very useful tools for determining the minimum energy structures of clusters of atoms described by interatomic potential functions and generally outperform other optimisation methods for this task. A number of applications are given, including covalent and close-packed structures of single or multi-component atomic species. It is also shown that (many-body) interatomic potential functions for multi-component systems can be derived by training a specially constructed NN on a variety of structural data.

  19. A Novel Activated-Charcoal-Doped Multiwalled Carbon Nanotube Hybrid for Quasi-Solid-State Dye-Sensitized Solar Cell Outperforming Pt Electrode.

    PubMed

    Arbab, Alvira Ayoub; Sun, Kyung Chul; Sahito, Iftikhar Ali; Qadir, Muhammad Bilal; Choi, Yun Seon; Jeong, Sung Hoon

    2016-03-23

    Highly conductive mesoporous carbon structures based on multiwalled carbon nanotubes (MWCNTs) and activated charcoal (AC) were synthesized by an enzymatic dispersion method. The synthesized carbon configuration consists of synchronized structures of highly conductive MWCNT and porous activated charcoal morphology. The proposed carbon structure was used as counter electrode (CE) for quasi-solid-state dye-sensitized solar cells (DSSCs). The AC-doped MWCNT hybrid showed much enhanced electrocatalytic activity (ECA) toward polymer gel electrolyte and revealed a charge transfer resistance (RCT) of 0.60 Ω, demonstrating a fast electron transport mechanism. The exceptional electrocatalytic activity and high conductivity of the AC-doped MWCNT hybrid CE are associated with its synchronized features of high surface area and electronic conductivity, which produces higher interfacial reaction with the quasi-solid electrolyte. Morphological studies confirm the forms of amorphous and conductive 3D carbon structure with high density of CNT colloid. The excessive oxygen surface groups and defect-rich structure can entrap an excessive volume of quasi-solid electrolyte and locate multiple sites for iodide/triiodide catalytic reaction. The resultant D719 DSSC composed of this novel hybrid CE fabricated with polymer gel electrolyte demonstrated an efficiency of 10.05% with a high fill factor (83%), outperforming the Pt electrode. Such facile synthesis of CE together with low cost and sustainability supports the proposed DSSCs' structure to stand out as an efficient next-generation photovoltaic device. PMID:26911208

  20. Adult Cleaner Wrasse Outperform Capuchin Monkeys, Chimpanzees and Orang-utans in a Complex Foraging Task Derived from Cleaner – Client Reef Fish Cooperation

    PubMed Central

    Proctor, Darby; Essler, Jennifer; Pinto, Ana I.; Wismer, Sharon; Stoinski, Tara; Brosnan, Sarah F.; Bshary, Redouan

    2012-01-01

    The insight that animals' cognitive abilities are linked to their evolutionary history, and hence their ecology, provides the framework for the comparative approach. Despite primates' renowned dietary complexity and social cognition, including cooperative abilities, we here demonstrate that cleaner wrasse outperform three primate species, capuchin monkeys, chimpanzees and orang-utans, in a foraging task involving a choice between two actions, both of which yield identical immediate rewards, but only one of which yields an additional delayed reward. The foraging task decisions involve partner choice in cleaners: they must service visiting client reef fish before resident clients to access both; otherwise the former switch to a different cleaner. Wild-caught adult, but not juvenile, cleaners learned to solve the task quickly and relearned the task when it was reversed. The majority of primates failed to perform above chance after 100 trials, which is in sharp contrast to previous studies showing that primates easily learn to choose an action that yields immediate double rewards compared to an alternative action. In conclusion, the adult cleaners' ability to choose a superior action with initially neutral consequences is likely due to repeated exposure in nature, which leads to specific learned optimal foraging decision rules. PMID:23185293

  1. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  2. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then finds a codebook starting from an initial codebook formed from the better vectors selected by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithm is expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
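
    A small sketch of the PCA-LBG-Centroid idea is given below: project training vectors onto the first principal component, partition them along the projection, seed the codebook with group centroids, and refine with LBG (k-means style) iterations. Grouping into equal-count bins and the codebook size are assumptions for illustration, not the paper's exact grouping rule.

```python
import numpy as np

# PCA-LBG-Centroid sketch: PCA projection -> grouping -> centroid seeding ->
# LBG refinement. Equal-count grouping along the projection is an assumption.

rng = np.random.default_rng(1)
train = rng.normal(size=(1024, 16))            # stand-in for image block vectors
codebook_size = 8

# PCA: first principal component of the centered training set
centered = train - train.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[0]

# seed codebook with centroids of equal-count groups along the projection
order = np.argsort(proj)
groups = np.array_split(order, codebook_size)
codebook = np.array([train[g].mean(axis=0) for g in groups])

# LBG refinement: assign vectors to the nearest codeword, recompute centroids
for _ in range(20):
    d = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    codebook = np.array([train[assign == k].mean(axis=0) if np.any(assign == k)
                         else codebook[k] for k in range(codebook_size)])
print("mean quantization error:", d.min(axis=1).mean())
```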

  3. Another hybrid conjugate gradient algorithm for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Andrei, Neculai

    2008-02-01

    Another hybrid conjugate gradient algorithm is subject to analysis. The parameter $\beta_k$ is computed as a convex combination of $\beta^{HS}_k$ (Hestenes-Stiefel) and $\beta^{DY}_k$ (Dai-Yuan), i.e. $\beta^{C}_k = (1 - \theta_k)\,\beta^{HS}_k + \theta_k\,\beta^{DY}_k$. The parameter $\theta_k$ in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair $(s_k, y_k)$ satisfies the quasi-Newton equation $\nabla^2 f(x_{k+1})\, s_k = y_k$, where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and the Dai-Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems are used, some of them from the CUTE library.
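
    A small numerical sketch of the hybrid parameter follows: it computes the Hestenes-Stiefel and Dai-Yuan values and their convex combination for a given theta. The paper's rule for choosing theta from the Newton-direction condition and the Wolfe line search are not reproduced; the fixed theta and toy gradients below are purely illustrative.

```python
import numpy as np

# Hybrid CG parameter sketch: beta_C = (1 - theta) * beta_HS + theta * beta_DY,
# with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k.

def hybrid_beta(g_new, g_old, d_old, theta):
    y = g_new - g_old
    beta_hs = float(g_new @ y) / float(d_old @ y)       # Hestenes-Stiefel
    beta_dy = float(g_new @ g_new) / float(d_old @ y)   # Dai-Yuan
    return (1.0 - theta) * beta_hs + theta * beta_dy

g_old = np.array([1.0, -2.0, 0.5])
g_new = np.array([0.4, -1.1, 0.2])
d_old = -g_old                                          # previous search direction
beta_c = hybrid_beta(g_new, g_old, d_old, theta=0.3)    # theta fixed for illustration
d_new = -g_new + beta_c * d_old                         # next conjugate direction
print("beta_C =", round(beta_c, 4), "new direction:", d_new)
```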

  4. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.

  5. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    PubMed Central

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
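
    A small sketch of the energy-aware adjustment idea follows: the sampling interval produced by the signal-driven ASA is stretched or shortened according to the node's battery state of charge and recent harvesting rate. The thresholds and scaling factors are illustrative assumptions, not the EASA rule from the paper.

```python
# Energy-aware sampling-interval adjustment sketch (illustrative thresholds).

def energy_aware_interval(base_interval_s, soc, harvest_mw, drain_mw):
    """base_interval_s: interval suggested by the signal-driven ASA.
    soc: battery state of charge in [0, 1]; harvest/drain in milliwatts."""
    if harvest_mw >= drain_mw and soc > 0.8:
        return base_interval_s                # energy-positive: keep the ASA rate
    if soc > 0.5:
        return base_interval_s * 1.5          # moderate reserve: slow down a bit
    if soc > 0.2:
        return base_interval_s * 3.0          # low reserve: sample sparsely
    return base_interval_s * 10.0             # critical: survival mode

print(energy_aware_interval(60, soc=0.9, harvest_mw=12, drain_mw=8))   # 60
print(energy_aware_interval(60, soc=0.35, harvest_mw=2, drain_mw=8))   # 180.0
```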

  6. Novel lung IMRT planning algorithms with nonuniform dose delivery strategy to account for respiratory motion.

    PubMed

    Li, Xiang; Zhang, Pengpeng; Mah, Dennis; Gewanter, Richard; Kutcher, Gerald

    2006-09-01

    To effectively deliver radiation dose to lung tumors, respiratory motion has to be considered in treatment planning. In this paper we first present a new lung IMRT planning algorithm, referred to as the dose shaping (DS) method, that shapes the dose distribution according to the probability distribution of the tumor over the breathing cycle to account for respiratory motion. In IMRT planning a dose-based convolution method was generally adopted to compensate for random organ motion by performing 4-D dose calculations using a tumor motion probability density function. We modified the CON-DOSE method to a dose-volume-histogram-based convolution method (CON-DVH) that allows a nonuniform dose distribution to account for respiratory motion. We implemented the two new planning algorithms on an in-house IMRT planning system that uses the Eclipse (Varian, Palo Alto, CA) planning workstation as the dose calculation engine. The new algorithms were compared with (1) the conventional margin extension approach in which the margin is generated based on the extreme positions of the tumor, (2) the dose-based convolution method, and (3) gating with 3 mm residual motion. Dose volume histogram, tumor control probability, normal tissue complication probability, and mean lung dose were calculated and used to evaluate the relative performance of these approaches at the end-exhale phase of the respiratory cycle. We recruited six patients in our treatment planning study. The study demonstrated that the two new methods could significantly reduce the ipsilateral normal lung dose and outperformed the margin extension method and the dose-based convolution method. Compared with the gated approach, which has the best performance in the low dose region, the two methods we proposed have similar potential to escalate tumor dose, but could be more efficient because dose is delivered continuously. PMID:17022235

  7. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559

  8. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  9. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E. ); Bohm, A.P.W. . Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, whose key characteristics are computational parallelism and the data dependences between the butterfly shuffles.

  10. Chinese Tallow Trees (Triadica sebifera) from the Invasive Range Outperform Those from the Native Range with an Active Soil Community or Phosphorus Fertilization

    PubMed Central

    Zhang, Ling; Zhang, Yaojun; Wang, Hong; Zou, Jianwen; Siemann, Evan

    2013-01-01

    Two mechanisms that have been proposed to explain success of invasive plants are unusual biotic interactions, such as enemy release or enhanced mutualisms, and increased resource availability. However, while these mechanisms are usually considered separately, both may be involved in successful invasions. Biotic interactions may be positive or negative and may interact with nutritional resources in determining invasion success. In addition, the effects of different nutrients on invasions may vary. Finally, genetic variation in traits between populations located in introduced versus native ranges may be important for biotic interactions and/or resource use. Here, we investigated the roles of soil biota, resource availability, and plant genetic variation using seedlings of Triadica sebifera in an experiment in the native range (China). We manipulated nitrogen (control or 4 g/m2), phosphorus (control or 0.5 g/m2), soil biota (untreated or sterilized field soil), and plant origin (4 populations from the invasive range, 4 populations from the native range) in a full factorial experiment. Phosphorus addition increased root, stem, and leaf masses. Leaf mass and height growth depended on population origin and soil sterilization. Invasive populations had higher leaf mass and growth rates than native populations did in fresh soil but they had lower, comparable leaf mass and growth rates in sterilized soil. Invasive populations had higher growth rates with phosphorus addition but native ones did not. Soil sterilization decreased specific leaf area in both native and exotic populations. Negative effects of soil sterilization suggest that soil pathogens may not be as important as soil mutualists for T. sebifera performance. Moreover, interactive effects of sterilization and origin suggest that invasive T. sebifera may have evolved more beneficial relationships with the soil biota. Overall, seedlings from the invasive range outperformed those from the native range, however, an

  11. Ultimate failure of the Lévy Foraging Hypothesis: Two-scale searching strategies outperform scale-free ones even when prey are scarce and cryptic.

    PubMed

    Benhamou, Simon; Collet, Julien

    2015-12-21

    The "Lévy Foraging Hypothesis" promotes Lévy walk (LW) as the best strategy to forage for patchily but unpredictably located prey. This strategy mixes extensive and intensive searching phases in a mostly cue-free way through strange, scale-free kinetics. It is however less efficient than a cue-driven two-scale Composite Brownian walk (CBW) when the resources encountered are systematically detected. Nevertheless, it could be assumed that the intrinsic capacity of LW to trigger cue-free intensive searching at random locations might be advantageous when resources are not only scarcely encountered but also so cryptic that the probability to detect those encountered during movement is low. Surprisingly, this situation, which should be quite common in natural environments, has almost never been studied. Only a few studies have considered "saltatory" foragers, which are fully "blind" while moving and thus detect prey only during scanning pauses, but none of them compared the efficiency of LW vs. CBW in this context or in less extreme contexts where the detection probability during movement is not null but very low. In a study based on computer simulations, we filled the bridge between the concepts of "pure continuous" and "pure saltatory" foraging by considering that the probability to detect resources encountered while moving may range from 0 to 1. We showed that regularly stopping to scan the environment can indeed improve efficiency, but only at very low detection probabilities. Furthermore, the LW is then systematically outperformed by a mixed cue-driven/internally-driven CBW. It is thus more likely that evolution tends to favour strategies that rely on environmental feedbacks rather than on strange kinetics. PMID:26463680

  12. Chinese tallow trees (Triadica sebifera) from the invasive range outperform those from the native range with an active soil community or phosphorus fertilization.

    PubMed

    Zhang, Ling; Zhang, Yaojun; Wang, Hong; Zou, Jianwen; Siemann, Evan

    2013-01-01

    Two mechanisms that have been proposed to explain success of invasive plants are unusual biotic interactions, such as enemy release or enhanced mutualisms, and increased resource availability. However, while these mechanisms are usually considered separately, both may be involved in successful invasions. Biotic interactions may be positive or negative and may interact with nutritional resources in determining invasion success. In addition, the effects of different nutrients on invasions may vary. Finally, genetic variation in traits between populations located in introduced versus native ranges may be important for biotic interactions and/or resource use. Here, we investigated the roles of soil biota, resource availability, and plant genetic variation using seedlings of Triadica sebifera in an experiment in the native range (China). We manipulated nitrogen (control or 4 g/m(2)), phosphorus (control or 0.5 g/m(2)), soil biota (untreated or sterilized field soil), and plant origin (4 populations from the invasive range, 4 populations from the native range) in a full factorial experiment. Phosphorus addition increased root, stem, and leaf masses. Leaf mass and height growth depended on population origin and soil sterilization. Invasive populations had higher leaf mass and growth rates than native populations did in fresh soil but they had lower, comparable leaf mass and growth rates in sterilized soil. Invasive populations had higher growth rates with phosphorus addition but native ones did not. Soil sterilization decreased specific leaf area in both native and exotic populations. Negative effects of soil sterilization suggest that soil pathogens may not be as important as soil mutualists for T. sebifera performance. Moreover, interactive effects of sterilization and origin suggest that invasive T. sebifera may have evolved more beneficial relationships with the soil biota. Overall, seedlings from the invasive range outperformed those from the native range, however

  13. Droplet digital polymerase chain reaction (PCR) outperforms real-time PCR in the detection of environmental DNA from an invasive fish species.

    PubMed

    Doi, Hideyuki; Takahara, Teruhiko; Minamoto, Toshifumi; Matsuhashi, Saeko; Uchii, Kimiko; Yamanaka, Hiroki

    2015-05-01

    Environmental DNA (eDNA) has been used to investigate species distributions in aquatic ecosystems. Most of these studies use real-time polymerase chain reaction (PCR) to detect eDNA in water; however, PCR amplification is often inhibited by the presence of organic and inorganic matter. In droplet digital PCR (ddPCR), the sample is partitioned into thousands of nanoliter droplets, and PCR inhibition may be reduced by the detection of the end-point of PCR amplification in each droplet, independent of the amplification efficiency. In addition, real-time PCR reagents can affect PCR amplification and consequently alter detection rates. We compared the effectiveness of ddPCR and real-time PCR using two different PCR reagents for the detection of eDNA from invasive bluegill sunfish, Lepomis macrochirus, in ponds. We found that ddPCR had higher detection rates of bluegill eDNA in pond water than real-time PCR with either of the PCR reagents, especially at low DNA concentrations. Tests of the limits of DNA detection, performed by spiking bluegill DNA into DNA extracts from ponds containing natural inhibitors, showed that ddPCR had a higher detection rate than real-time PCR. Our results suggest that ddPCR is more resistant to the presence of PCR inhibitors in field samples than real-time PCR. Thus, ddPCR outperforms real-time PCR methods for detecting eDNA to document species distributions in natural habitats, especially in habitats with high concentrations of PCR inhibitors. PMID:25850372

  14. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual serves as a guide, attracting offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
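
    As a rough illustration of the guiding idea summarized above (crossover with the current global best, a dynamic mutation probability, and an occasional local search), here is a minimal sketch for continuous minimization. The operators, schedules, and parameter values are generic stand-ins, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def gea_minimize(f, dim=10, pop_size=30, iters=200, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fitness = np.apply_along_axis(f, 1, pop)
    for t in range(iters):
        best = pop[np.argmin(fitness)]
        p_mut = 0.5 * (1 - t / iters)        # dynamic mutation probability (illustrative schedule)
        p_local = 0.1 + 0.4 * (t / iters)    # dynamic local-search probability (illustrative schedule)
        for i in range(pop_size):
            # crossover with the current global best instead of a random mate
            alpha = rng.random(dim)
            child = alpha * pop[i] + (1 - alpha) * best
            if rng.random() < p_mut:
                child += rng.normal(0, 0.1 * (hi - lo), dim)
            if rng.random() < p_local:
                # simple greedy local search: keep a small perturbation only if it improves
                trial = child + rng.normal(0, 0.01 * (hi - lo), dim)
                if f(trial) < f(child):
                    child = trial
            child = np.clip(child, lo, hi)
            fc = f(child)
            if fc < fitness[i]:              # greedy replacement
                pop[i], fitness[i] = child, fc
    return pop[np.argmin(fitness)], float(fitness.min())

sphere = lambda x: float(np.sum(x ** 2))
print(gea_minimize(sphere)[1])
```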

  15. New Enhanced Artificial Bee Colony (JA-ABC5) Algorithm with Application for Reactive Power Optimization

    PubMed Central

    2015-01-01

    The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. Besides that, modified mutation equations have also been introduced in the employed and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested to optimize the reactive power optimization problem. The performance results have clearly shown that the newly proposed algorithm has outperformed other compared algorithms in terms of convergence speed and global optimum achievement. PMID:25879054

  16. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.

    PubMed

    Cao, Leilei; Xu, Lihong; Goodman, Erik D

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual serves as a guide, attracting offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421

  17. A Study on the Optimization Performance of Fireworks and Cuckoo Search Algorithms in Laser Machining Processes

    NASA Astrophysics Data System (ADS)

    Goswami, D.; Chakraborty, S.

    2014-11-01

    Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to their several unique advantages, like high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimum heat-affected zone and green manufacturing. To achieve the best desired machining performance and high quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, the fireworks algorithm and the cuckoo search (CS) algorithm are applied for single as well as multi-response optimization of two laser machining processes. It is observed that although almost similar solutions are obtained for both these algorithms, the CS algorithm outperforms the fireworks algorithm with respect to average computation time, convergence rate and performance consistency.
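
    For readers unfamiliar with cuckoo search, the sketch below shows its standard mechanics (Lévy-flight moves around the current best plus abandonment of a fraction of nests). It is a generic illustration, not the laser-machining formulation used in the paper; all parameter values are assumptions.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(2)

def levy_flight(dim, beta=1.5):
    """Mantegna's algorithm for heavy-tailed (Levy-stable) step sizes."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=5, n_nests=20, iters=300, pa=0.25, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        for i in range(n_nests):
            # Levy-flight move scaled by the distance to the current best nest
            cand = np.clip(nests[i] + 0.01 * levy_flight(dim) * (nests[i] - best), lo, hi)
            j = rng.integers(n_nests)            # compare against a randomly chosen nest
            fc = f(cand)
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # abandon a fraction pa of the worst nests and rebuild them at random
        n_abandon = int(pa * n_nests)
        if n_abandon:
            worst = np.argsort(fit)[-n_abandon:]
            nests[worst] = rng.uniform(lo, hi, (n_abandon, dim))
            fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))
print(cuckoo_search(sphere)[1])
```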

  18. Success on Algorithmic and LOCS vs. Conceptual Chemistry Exam Questions

    NASA Astrophysics Data System (ADS)

    Zoller, Uri; Lubezky, Aviva; Nakhleh, Mary B.; Tessier, Barbara; Dori, Yehudit J.

    1995-11-01

    The performance of freshman science, engineering, and in-service teacher students in three Israeli and American universities on algorithmic, lower-order cognitive skills (LOCS), and conceptual chemistry exam questions was investigated. The driving force for the study was an interest in moving chemistry instruction from an algorithm-oriented factual recall approach dominated by LOCS to a decision-making, problem-solving, and critical thinking approach dominated by higher-order cognitive skills (HOCS). Students' responses to the specially designed algorithmic, LOCS, and conceptual exam questions were scored and analyzed for correlations and for differences between the means within and across universities by the question's category. The main findings were: (1) students in all three universities performed consistently on each of the three categories in the order of algorithmic > LOCS > conceptual questions, (2) success on algorithmic does not imply success on conceptual, or even on LOCS questions, and (3) students taught in small classes outperformed by far those in large lecture sessions in all three categories. The implied paradigm shift from an algorithmic/LOCS to a conceptual/HOCS orientation should be moved from a research-based theoretical domain to actual implementation in order for a meaningful improvement of chemistry teaching to occur.

  19. ETD: an extended time delay algorithm for ventricular fibrillation detection.

    PubMed

    Kim, Jungyoon; Chu, Chao-Hsien

    2014-01-01

    Ventricular fibrillation (VF) is the most serious type of heart attack, which requires quick detection and first aid to improve patients' survival rates. To be most effective in using wearable devices for VF detection, it is vital that the detection algorithms be accurate, robust, reliable and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has a high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method. However, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered and (2) with the same sensitivity setting, ETD improves upon TD in three other quality measures by up to 7.64%, and in terms of aggregate accuracy the ETD algorithm shows an improvement of 2.6% in the area under the curve (AUC) compared to TD. PMID:25571480

  20. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.

  1. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  2. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.

  3. A total variation diminishing finite difference algorithm for sonic boom propagation models

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1993-01-01

    It is difficult to accurately model the rise phases of sonic boom waveforms with traditional finite difference algorithms because of finite difference phase dispersion. This paper introduces the concept of a total variation diminishing (TVD) finite difference method as a tool for accurately modeling the rise phases of sonic booms. A standard second order finite difference algorithm and its TVD modified counterpart are both applied to the one-way propagation of a square pulse. The TVD method clearly outperforms the non-TVD method, showing great potential as a new computational tool in the analysis of sonic boom propagation.
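
    To illustrate the TVD idea in the simplest setting, the sketch below applies a generic minmod-limited upwind scheme to linear advection of a square pulse; it is not the paper's sonic-boom propagation model, and the grid and Courant number are arbitrary choices.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller of the two slopes, or zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(u, c, steps):
    """Advance u_t + a*u_x = 0 (a > 0, periodic grid) with a minmod-limited upwind scheme; c = a*dt/dx <= 1."""
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        u_face = u + 0.5 * (1 - c) * slope        # reconstructed value at the right cell face
        flux = c * u_face
        u = u - (flux - np.roll(flux, 1))
    return u

n = 200
x = np.linspace(0, 1, n, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)    # square pulse
u = advect_tvd(u0.copy(), c=0.5, steps=200)
# total variation should not increase, i.e. no spurious oscillations at the pulse edges
print("TV before:", np.abs(np.diff(u0)).sum(), "TV after:", np.abs(np.diff(u)).sum())
```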

  4. Growth algorithms for lattice heteropolymers at low temperatures

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Mehra, Vishal; Nadler, Walter; Grassberger, Peter

    2003-01-01

    Two improved versions of the pruned-enriched-Rosenbluth method (PERM) are proposed and tested on simple models of lattice heteropolymers. Both are found to outperform not only the previous version of PERM, but also all other stochastic algorithms which have been employed on this problem, except for the core directed chain growth method (CG) of Beutler and Dill. In nearly all test cases they are faster in finding low-energy states, and in many cases they found new lowest energy states missed in previous papers. The CG method is superior to our method in some cases, but less efficient in others. On the other hand, the CG method relies heavily on heuristics based on presumptions about the hydrophobic core and does not give thermodynamic properties, while the present method is a fully blind general purpose algorithm giving correct Boltzmann-Gibbs weights, and can be applied in principle to any stochastic sampling problem.

  5. Split Bregman's algorithm for three-dimensional mesh segmentation

    NASA Astrophysics Data System (ADS)

    Habiba, Nabi; Ali, Douik

    2016-05-01

    Variational methods have attracted a lot of attention in the literature, especially for image and mesh segmentation. The methods aim at minimizing the energy to optimize both edge and region detections. We propose a spectral mesh decomposition algorithm to obtain disjoint but meaningful regions of an input mesh. The related optimization problem is nonconvex, and it is very difficult to find a good approximation or global optimum, which represents a challenge in computer vision. We propose an alternating split Bregman algorithm for mesh segmentation, where we extended the image-dedicated model to a three-dimensional (3-D) mesh one. By applying our scheme to 3-D mesh segmentation, we obtain fast solvers that can outperform various conventional ones, such as graph-cut and primal dual methods. A consistent evaluation of the proposed method on various public domain 3-D databases for different metrics is elaborated, and a comparison with the state-of-the-art is performed.

  6. A vertical handoff decision algorithm based on ARMA prediction model

    NASA Astrophysics Data System (ADS)

    Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan

    2011-12-01

    With the development of computer technology and the increasing demand for mobile communications, the next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks. During the vertical handoff procedure, the handoff decision is a crucial issue for efficient mobility. Based on the auto regression moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm, which aims to improve the performance of vertical handoff and avoid unnecessary handoff. Based on the current received signal strength (RSS) and the previous RSS, the proposed approach adopts an ARMA model to predict the next RSS, and then uses the predicted RSS to determine whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the RSS-based scheme with a threshold in terms of handoff performance and the number of handoffs.
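
    A minimal sketch of the prediction step only (not the paper's full decision logic): fit an ARMA model to the RSS history and flag a handoff check when the one-step-ahead forecast drops below a threshold. The ARMA order, the threshold, and the synthetic RSS trace are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)

# synthetic RSS trace (dBm): a slowly decaying signal plus noise, e.g. a user leaving coverage
t = np.arange(150)
rss = -60 - 0.2 * t + rng.normal(0, 2, t.size)

HANDOFF_THRESHOLD = -80.0    # dBm, illustrative

history = list(rss[:40])
for k in range(40, len(rss)):
    # ARMA(p, q) is ARIMA(p, 0, q); the order (2, 0, 1) is an illustrative choice
    model = ARIMA(history, order=(2, 0, 1)).fit()
    rss_pred = model.forecast(steps=1)[0]
    if rss_pred < HANDOFF_THRESHOLD:
        print(f"step {k}: predicted RSS {rss_pred:.1f} dBm below threshold -> trigger link-layer event")
        break
    history.append(rss[k])
```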

  7. A vertical handoff decision algorithm based on ARMA prediction model

    NASA Astrophysics Data System (ADS)

    Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan

    2012-01-01

    With the development of computer technology and the increasing demand for mobile communications, the next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks. During the vertical handoff procedure, the handoff decision is a crucial issue for efficient mobility. Based on the auto regression moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm, which aims to improve the performance of vertical handoff and avoid unnecessary handoff. Based on the current received signal strength (RSS) and the previous RSS, the proposed approach adopts an ARMA model to predict the next RSS, and then uses the predicted RSS to determine whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the RSS-based scheme with a threshold in terms of handoff performance and the number of handoffs.

  8. Memetic algorithms for ligand expulsion from protein cavities

    NASA Astrophysics Data System (ADS)

    Rydzewski, J.; Nowak, W.

    2015-09-01

    Ligand diffusion through a protein interior is a fundamental process governing biological signaling and enzymatic catalysis. A complex topology of channels in proteins leads often to difficulties in modeling ligand escape pathways by classical molecular dynamics simulations. In this paper, two novel memetic methods for searching the exit paths and cavity space exploration are proposed: Memory Enhanced Random Acceleration (MERA) Molecular Dynamics (MD) and Immune Algorithm (IA). In MERA, a pheromone concept is introduced to optimize an expulsion force. In IA, hybrid learning protocols are exploited to predict ligand exit paths. They are tested on three protein channels with increasing complexity: M2 muscarinic G-protein-coupled receptor, enzyme nitrile hydratase, and heme-protein cytochrome P450cam. In these cases, the memetic methods outperform simulated annealing and random acceleration molecular dynamics. The proposed algorithms are general and appropriate in all problems where an accelerated transport of an object through a network of channels is studied.

  9. Memetic algorithms for ligand expulsion from protein cavities.

    PubMed

    Rydzewski, J; Nowak, W

    2015-09-28

    Ligand diffusion through a protein interior is a fundamental process governing biological signaling and enzymatic catalysis. A complex topology of channels in proteins leads often to difficulties in modeling ligand escape pathways by classical molecular dynamics simulations. In this paper, two novel memetic methods for searching the exit paths and cavity space exploration are proposed: Memory Enhanced Random Acceleration (MERA) Molecular Dynamics (MD) and Immune Algorithm (IA). In MERA, a pheromone concept is introduced to optimize an expulsion force. In IA, hybrid learning protocols are exploited to predict ligand exit paths. They are tested on three protein channels with increasing complexity: M2 muscarinic G-protein-coupled receptor, enzyme nitrile hydratase, and heme-protein cytochrome P450cam. In these cases, the memetic methods outperform simulated annealing and random acceleration molecular dynamics. The proposed algorithms are general and appropriate in all problems where an accelerated transport of an object through a network of channels is studied. PMID:26428990

  10. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known as an NP-complete problem, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the problem of distributed systems scheduling. With regard to balancing load efficiently, Artificial Bee Colony (ABC) has been applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which the Learning Automata method has been used as the local search. The results demonstrated that the proposed method outperforms the above-mentioned method in terms of communication cost.

  11. Algorithms Bridging Quantum Computation and Chemistry

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod Ryan

    The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we develop not only new algorithms to use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems. In particular we use

  12. A Bayesian algorithm for detecting differentially expressed proteins and its application in breast cancer research

    PubMed Central

    Santra, Tapesh; Delatola, Eleni Ioanna

    2016-01-01

    Presence of considerable noise and missing data points make analysis of mass-spectrometry (MS) based proteomic data a challenging task. The missing values in MS data are caused by the inability of MS machines to reliably detect proteins whose abundances fall below the detection limit. We developed a Bayesian algorithm that exploits this knowledge and uses missing data points as a complementary source of information to the observed protein intensities in order to find differentially expressed proteins by analysing MS based proteomic data. We compared its accuracy with many other methods using several simulated datasets. It consistently outperformed other methods. We then used it to analyse proteomic screens of a breast cancer (BC) patient cohort. It revealed large differences between the proteomic landscapes of triple negative and Luminal A, which are the most and least aggressive types of BC. Unexpectedly, majority of these differences could be attributed to the direct transcriptional activity of only seven transcription factors some of which are known to be inactive in triple negative BC. We also identified two new proteins which significantly correlated with the survival of BC patients, and therefore may have potential diagnostic/prognostic values. PMID:27444576

  13. A novel functional module detection algorithm for protein-protein interaction networks

    PubMed Central

    Hwang, Woochang; Cho, Young-Rae; Zhang, Aidong; Ramanathan, Murali

    2006-01-01

    Background: The sparse connectivity of protein-protein interaction data sets makes identification of functional modules challenging. The purpose of this study is to critically evaluate a novel clustering technique for clustering and detecting functional modules in protein-protein interaction networks, termed STM. Results: STM selects representative proteins for each cluster and iteratively refines clusters based on a combination of the signal transduced and graph topology. STM is found to be effective at detecting clusters with a diverse range of interaction structures that are significant on measures of biological relevance. The STM approach is compared to six competing approaches including the maximum clique, quasi-clique, minimum cut, betweenness cut and Markov Clustering (MCL) algorithms. The clusters obtained by each technique are compared for enrichment of biological function. STM generates larger clusters and the clusters identified have p-values that are approximately 125-fold better than the other methods on biological function. An important strength of STM is that the percentage of proteins that are discarded to create clusters is much lower than the other approaches. Conclusion: STM outperforms competing approaches and is capable of effectively detecting both densely and sparsely connected, biologically relevant functional modules with fewer discards. PMID:17147822

  14. A novel Iterative algorithm to text segmentation for web born-digital images

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen

    2015-07-01

    Since web born-digital images have low resolution and dense text atoms, text region over-merging and miss detection are still two open issues to be addressed. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, the candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSER) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap checking method the final well-segmented text regions are selected from these groups over all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms, and that its overall performance outperforms the best of the methods shown in the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.

  15. A Bayesian algorithm for detecting differentially expressed proteins and its application in breast cancer research

    NASA Astrophysics Data System (ADS)

    Santra, Tapesh; Delatola, Eleni Ioanna

    2016-07-01

    Presence of considerable noise and missing data points make analysis of mass-spectrometry (MS) based proteomic data a challenging task. The missing values in MS data are caused by the inability of MS machines to reliably detect proteins whose abundances fall below the detection limit. We developed a Bayesian algorithm that exploits this knowledge and uses missing data points as a complementary source of information to the observed protein intensities in order to find differentially expressed proteins by analysing MS based proteomic data. We compared its accuracy with many other methods using several simulated datasets. It consistently outperformed other methods. We then used it to analyse proteomic screens of a breast cancer (BC) patient cohort. It revealed large differences between the proteomic landscapes of triple negative and Luminal A, which are the most and least aggressive types of BC. Unexpectedly, majority of these differences could be attributed to the direct transcriptional activity of only seven transcription factors some of which are known to be inactive in triple negative BC. We also identified two new proteins which significantly correlated with the survival of BC patients, and therefore may have potential diagnostic/prognostic values.

  16. A Bayesian algorithm for detecting differentially expressed proteins and its application in breast cancer research.

    PubMed

    Santra, Tapesh; Delatola, Eleni Ioanna

    2016-01-01

    Presence of considerable noise and missing data points make analysis of mass-spectrometry (MS) based proteomic data a challenging task. The missing values in MS data are caused by the inability of MS machines to reliably detect proteins whose abundances fall below the detection limit. We developed a Bayesian algorithm that exploits this knowledge and uses missing data points as a complementary source of information to the observed protein intensities in order to find differentially expressed proteins by analysing MS based proteomic data. We compared its accuracy with many other methods using several simulated datasets. It consistently outperformed other methods. We then used it to analyse proteomic screens of a breast cancer (BC) patient cohort. It revealed large differences between the proteomic landscapes of triple negative and Luminal A, which are the most and least aggressive types of BC. Unexpectedly, majority of these differences could be attributed to the direct transcriptional activity of only seven transcription factors some of which are known to be inactive in triple negative BC. We also identified two new proteins which significantly correlated with the survival of BC patients, and therefore may have potential diagnostic/prognostic values. PMID:27444576

  17. Rayleigh-Rice Mixture Parameter Estimation via EM Algorithm for Change Detection in Multispectral Images.

    PubMed

    Zanetti, Massimo; Bovolo, Francesca; Bruzzone, Lorenzo

    2015-12-01

    The problem of estimating the parameters of a Rayleigh-Rice mixture density is often encountered in image analysis (e.g., remote sensing and medical image processing). In this paper, we address this general problem in the framework of change detection (CD) in multitemporal and multispectral images. One widely used approach to CD in multispectral images is based on the change vector analysis. Here, the distribution of the magnitude of the difference image can be theoretically modeled by a Rayleigh-Rice mixture density. However, given the complexity of this model, in applications, a Gaussian-mixture approximation is often considered, which may affect the CD results. In this paper, we present a novel technique for parameter estimation of the Rayleigh-Rice density that is based on a specific definition of the expectation-maximization algorithm. The proposed technique, which is characterized by good theoretical properties, iteratively updates the parameters and does not depend on specific optimization routines. Several numerical experiments on synthetic data demonstrate the effectiveness of the method, which is general and can be applied to any image processing problem involving the Rayleigh-Rice mixture density. In the CD context, the Rayleigh-Rice model (which is theoretically derived) outperforms other empirical models. Experiments on real multitemporal and multispectral remote sensing images confirm the validity of the model by returning significantly higher CD accuracies than those obtained by using the state-of-the-art approaches. PMID:26336124
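
    A generic EM sketch for a two-component Rayleigh-Rice mixture on magnitude data is shown below. It is illustrative only: the paper derives its own closed-form update rules that avoid optimization routines, whereas this sketch falls back on a numerical optimizer for the Rician step, and the synthetic data are made up.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def em_rayleigh_rice(x, iters=50):
    """EM for p(x) = w*Rayleigh(x; s) + (1-w)*Rice(x; b, sigma) on magnitude data x > 0."""
    w, s = 0.5, float(np.std(x))                 # Rayleigh weight and scale
    b, sigma = 2.0, float(np.std(x))             # Rice shape (nu/sigma) and scale
    for _ in range(iters):
        # E-step: responsibility of the Rayleigh ("no change") component
        f_ray = stats.rayleigh.pdf(x, scale=s)
        f_rice = stats.rice.pdf(x, b, scale=sigma)
        r = w * f_ray / (w * f_ray + (1 - w) * f_rice + 1e-300)
        # M-step
        w = float(r.mean())
        s = float(np.sqrt(np.sum(r * x ** 2) / (2.0 * np.sum(r))))   # weighted Rayleigh MLE

        def nll(theta):                          # weighted Rician negative log-likelihood
            bb, ss = np.exp(theta)               # log-parameters keep b, sigma positive
            return -np.sum((1 - r) * stats.rice.logpdf(x, bb, scale=ss))

        res = minimize(nll, np.log([b, sigma]), method="Nelder-Mead")
        b, sigma = np.exp(res.x)
    return w, s, float(b), float(sigma)

# synthetic magnitude-of-difference data: unchanged pixels ~ Rayleigh, changed pixels ~ Rice
rng = np.random.default_rng(4)
x = np.concatenate([stats.rayleigh.rvs(scale=1.0, size=4000, random_state=rng),
                    stats.rice.rvs(4.0, scale=1.0, size=1000, random_state=rng)])
print(em_rayleigh_rice(x))
```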

  18. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Providing authentication and confidentiality for databases during information exchange over insecure networks is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  19. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  20. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    NASA Astrophysics Data System (ADS)

    Joung, JinWook; Smyth, Andrew W.; Chung, Lan

    2010-06-01

    The active interaction control (AIC) system consisting of a primary structure, an auxiliary structure and an interaction element was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive amount of switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement-disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off2 algorithms. The AID-off2 algorithm is designed to affect deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure of designing the engagement-disengagement conditions of the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the amount of switching times triggered from the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off2 algorithm outperforms the AID-off algorithm in reducing the number of switching times.

  1. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Several researchers have aimed to develop scheduling algorithm variants that achieve optimality, and these have shown good performance for task scheduling with respect to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The first step is to compute, for each task, an average value from its sorted list of completion times. The task with the maximum average is then selected and allocated to the machine that gives it the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937
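
    One way to read the steps described above is sketched below; this is an interpretation of the abstract with made-up expected-time data, and the paper's exact completion-time model and tie-breaking may differ.

```python
import numpy as np

def sort_mid(etc):
    """etc[i, j] = expected time to compute task i on machine j.
    Returns a task -> machine assignment following the steps sketched in the abstract."""
    n_tasks, n_machines = etc.shape
    ready = np.zeros(n_machines)                 # current ready time (load) of each machine
    unassigned = set(range(n_tasks))
    assignment = {}
    while unassigned:
        tasks = sorted(unassigned)
        completion = ready[None, :] + etc[tasks, :]          # completion time of each task on each machine
        avg = np.sort(completion, axis=1).mean(axis=1)       # average over the sorted completion-time list
        pick = tasks[int(np.argmax(avg))]                    # task with the maximum average
        best_m = int(np.argmin(ready + etc[pick]))           # machine giving it the minimum completion time
        assignment[pick] = best_m
        ready[best_m] += etc[pick, best_m]
        unassigned.remove(pick)
    return assignment, float(ready.max())                    # schedule and resulting makespan

rng = np.random.default_rng(5)
etc = rng.uniform(1, 10, size=(8, 3))
print(sort_mid(etc))
```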

  2. A Novel Tracking Algorithm via Feature Points Matching

    PubMed Central

    Luo, Nan; Sun, Quansen; Chen, Qiang; Ji, Zexuan; Xia, Deshen

    2015-01-01

    Visual target tracking is a primary task in many computer vision applications and has been widely studied in recent years. Among all the tracking methods, the mean shift algorithm has attracted extraordinary interest and been well developed in the past decade due to its excellent performance. However, it is still challenging for the color histogram based algorithms to deal with the complex target tracking. Therefore, the algorithms based on other distinguishing features are highly required. In this paper, we propose a novel target tracking algorithm based on mean shift theory, in which a new type of image feature is introduced and utilized to find the corresponding region between the neighbor frames. The target histogram is created by clustering the features obtained in the extraction strategy. Then, the mean shift process is adopted to calculate the target location iteratively. Experimental results demonstrate that the proposed algorithm can deal with the challenging tracking situations such as: partial occlusion, illumination change, scale variations, object rotation and complex background clutter. Meanwhile, it outperforms several state-of-the-art methods. PMID:25617769

  3. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize the regularity to design multiobjective optimization algorithms has become the research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on the nondominated sorting is used to choose the individuals to the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246

  4. Significant lexical relationships

    SciTech Connect

    Pedersen, T.; Kayaalp, M.; Bruce, R.

    1996-12-31

    Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
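
    For illustration, an exact test of association for a word pair can be run from a 2x2 co-occurrence table with Fisher's exact test; this is a stand-in only, since the exact conditional test and software described in the paper may differ, and the counts below are hypothetical.

```python
from scipy.stats import fisher_exact

# hypothetical bigram counts for a word pair (w1, w2) in a corpus:
# rows: w1 present / absent; columns: w2 present / absent
table = [[12,   88],
         [40, 9860]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```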

  5. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a refined centerline extraction algorithm that is based on, and significantly improved from, distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406

  6. Statistical Significance Testing.

    ERIC Educational Resources Information Center

    McLean, James E., Ed.; Kaufman, Alan S., Ed.

    1998-01-01

    The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…

  7. Lack of Statistical Significance

    ERIC Educational Resources Information Center

    Kehle, Thomas J.; Bray, Melissa A.; Chafouleas, Sandra M.; Kawano, Takuji

    2007-01-01

    Criticism has been leveled against the use of statistical significance testing (SST) in many disciplines. However, the field of school psychology has been largely devoid of critiques of SST. Inspection of the primary journals in school psychology indicated numerous examples of SST with nonrandom samples and/or samples of convenience. In this…

  8. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  9. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  10. Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources

    PubMed Central

    Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter

    2016-01-01

    Background: Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. Methods: EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. Results: While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms. PMID:26809000
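
    The two metrics used above can be computed with a small helper such as the following (an illustrative, set-based simplification of the activity-based definitions in the abstract):

```python
def precision_recall(reconstructed, simulated):
    """Precision: fraction of reconstructed sources that were actually simulated.
    Recall: fraction of simulated sources that were reconstructed."""
    reconstructed, simulated = set(reconstructed), set(simulated)
    true_pos = reconstructed & simulated
    precision = len(true_pos) / len(reconstructed) if reconstructed else 0.0
    recall = len(true_pos) / len(simulated) if simulated else 0.0
    return precision, recall

# e.g. cortical regions identified by label (hypothetical example)
print(precision_recall(reconstructed={"M1", "S1", "PMd"}, simulated={"M1", "S1"}))
```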

  11. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  12. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT

    SciTech Connect

    Matenine, Dmitri Mascolo-Fortin, Julia; Goussard, Yves

    2015-11-15

    Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can

  13. An algorithm for robust and efficient location of T-wave ends in electrocardiograms.

    PubMed

    Zhang, Qinghua; Manriquez, Alfredo Illanes; Médigue, Claire; Papelier, Yves; Sorine, Michel

    2006-12-01

    The purpose of this paper is to propose a new algorithm for T-wave end location in electrocardiograms, mainly through the computation of an indicator related to the area covered by the T-wave curve. Based on simple assumptions, essentially on the concavity of the T-wave form, it is formally proved that the maximum of the computed indicator inside each cardiac cycle coincides with the T-wave end. Moreover, the algorithm is robust to acquisition noise, to waveform morphological variations and to baseline wander. It is also computationally very simple: the main computation can be implemented as a simple finite impulse response filter. When evaluated with the PhysioNet QT database in terms of the mean and the standard deviation of the T-wave end location errors, the proposed algorithm outperforms the other algorithms evaluated with the same database, according to the most recent publications available to our knowledge. PMID:17153212
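
    The sketch below shows one plausible instantiation of an area-based indicator computed with a sliding window, i.e., a simple FIR (moving-sum) filter; the paper's exact indicator, window length, and search-window placement may differ, and the synthetic "T wave" is purely illustrative.

```python
import numpy as np

def twave_end_index(ecg, fs, search_start, search_end, win_ms=120):
    """Candidate T-wave end: argmax of a sliding-window area indicator within a search window.
    The indicator at sample i is sum over the preceding window of (ecg[j] - ecg[i]),
    computed via a moving sum, i.e. an FIR filter."""
    w = int(win_ms * fs / 1000)
    moving_sum = np.convolve(ecg, np.ones(w), mode="full")[w - 1:w - 1 + len(ecg)]
    indicator = moving_sum - w * ecg
    seg = indicator[search_start:search_end]
    return search_start + int(np.argmax(seg))

# synthetic example: a smooth positive bump standing in for a T wave;
# the indicator should peak roughly where the bump flattens back to baseline
fs = 250
t = np.arange(0, 1, 1 / fs)
ecg = np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))
idx = twave_end_index(ecg, fs, search_start=int(0.32 * fs), search_end=int(0.7 * fs))
print(idx / fs, "s")
```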

  14. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    PubMed

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (and often a quite large one comprised of many individual documents) based on input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor, and a hot research topic-there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others in respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm indeed effectively and rapidly identifies optimal ranking functions. PMID:27487242

  15. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy

    PubMed Central

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (and often a quite large one comprised of many individual documents) based on input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor, and a hot research topic–there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others in respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm indeed effectively and rapidly identifies optimal ranking functions. PMID:27487242

  16. Combining algorithms in automatic detection of QRS complexes in ECG signals.

    PubMed

    Meyer, Carsten; Fernández Gavela, José; Harris, Matthew

    2006-07-01

    QRS complex and specifically R-Peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using various methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters allowing to balance the contribution of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713
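
    A toy illustration of the combination idea (not the paper's data-driven estimation procedure): each detector votes on candidate R-peak locations and a weighted agreement score decides which candidates to keep. The weights, tolerance, and example detections below are assumptions.

```python
import numpy as np

def combine_detections(peaks_a, peaks_b, w_a=0.6, w_b=0.4, tol=10, accept=0.5):
    """Merge two detectors' R-peak candidates (sample indices).
    A candidate is kept if the weighted support from both detectors reaches `accept`."""
    peaks_a, peaks_b = np.asarray(peaks_a), np.asarray(peaks_b)
    candidates = np.unique(np.concatenate([peaks_a, peaks_b]))
    kept = []
    for c in candidates:
        support = (w_a * np.any(np.abs(peaks_a - c) <= tol) +
                   w_b * np.any(np.abs(peaks_b - c) <= tol))
        if support >= accept:
            kept.append(int(c))
    # collapse candidates closer than `tol` samples into a single detection
    merged = []
    for c in kept:
        if not merged or c - merged[-1] > tol:
            merged.append(c)
    return merged

# e.g. outputs of a Pan-Tompkins-style detector and a wavelet-based detector (made-up indices)
print(combine_detections([100, 350, 610], [102, 348, 460, 612]))
```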

  17. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998

  18. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm.

    PubMed

    Wang, Jiaxi; Lin, Boliang; Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998

  19. Design and Implementation of Broadcast Algorithms for Extreme-Scale Systems

    SciTech Connect

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua

    2011-01-01

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.

  20. A swarm intelligence based memetic algorithm for task allocation in distributed systems

    NASA Astrophysics Data System (ADS)

    Sarvizadeh, Raheleh; Haghi Kashani, Mostafa

    2011-12-01

    This paper proposes a Swarm Intelligence based Memetic algorithm for Task Allocation and scheduling in distributed systems. The tasks scheduling in distributed systems is known as an NP-complete problem. Hence, many genetic algorithms have been proposed for searching optimal solutions from entire solution space. However, these existing approaches are going to scan the entire solution space without considering the techniques that can reduce the complexity of the optimization. Spending too much time for doing scheduling is considered the main shortcoming of these approaches. Therefore, in this paper memetic algorithm has been used to cope with this shortcoming. With regard to load balancing efficiently, Bee Colony Optimization (BCO) has been applied as local search in the proposed memetic algorithm. Extended experimental results demonstrated that the proposed method outperformed the existing GA-based method in terms of CPU utilization.

  1. A swarm intelligence based memetic algorithm for task allocation in distributed systems

    NASA Astrophysics Data System (ADS)

    Sarvizadeh, Raheleh; Haghi Kashani, Mostafa

    2012-01-01

    This paper proposes a Swarm Intelligence based Memetic algorithm for Task Allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without employing techniques that can reduce the complexity of the optimization, and the excessive time spent on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To achieve efficient load balancing, Bee Colony Optimization (BCO) is applied as the local search within the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.

  2. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  3. Significant Tsunami Events

    NASA Astrophysics Data System (ADS)

    Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.

    2014-12-01

    Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and the resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria to determine the most significant tsunamis: the number of deaths, the amount of damage, the maximum runup height, a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami, and a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/

  4. Synaptic dynamics: linear model and adaptation algorithm.

    PubMed

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

    In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses will be analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper presents a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability, and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-Layer Neural Network and GMM classifiers while having fewer free parameters and

  5. The E-Wave Deceleration Rate E/DT Outperforms the Tissue Doppler-Derived Index E/e' in Characterizing Lung Remodeling in Heart Failure with Preserved Ejection Fraction

    PubMed Central

    Nguyen, T. Dung; Shingu, Yasushige; Schwarzer, Michael; Schrepper, Andrea; Doenst, Torsten

    2013-01-01

    Background Diastolic dysfunction in heart failure with preserved ejection fraction (HFpEF) may result in pulmonary congestion and lung remodeling. We evaluated the usefulness of major diastolic echocardiographic parameters and of the deceleration rate of early transmitral diastolic velocity (E/DT) in predicting lung remodeling in a rat model of HFpEF. Methods and Results Rats underwent aortic banding (AoB) to induce pressure overload (PO). Left ventricular hypertrophy fully developed 2 weeks after AoB. At 4 and 6 weeks, the lung weight-to-body weight ratio (LW/BW), a sensitive marker for pulmonary congestion and remodeling, dramatically increased despite preserved fractional shortening, indicating the presence of HFpEF. The time course of LW/BW was well reflected by E/DT, by the ratio of early to late transmitral diastolic velocity (E/A) and the deceleration time of E (DT), but not by the ratio of transmitral to mitral annular early diastolic velocity (E/e'). In agreement, the best correlation with LW/BW was found for E/DT (r = 0.76; p<0.0001), followed by E/A (r = 0.69; p<0.0001), DT (r = −0.62; p<0.0001) and finally E/e' (r = 0.51; p<0.001). Furthermore, analysis of receiver-operating characteristic curves for the prediction of increased LW/BW revealed excellent area under the curve values for E/DT (AUC = 0.98) and DT (AUC = 0.95), which are significantly higher than that of E/e' (AUC = 0.82). In a second approach, we also found that the new parameter E/DT correlated well with right ventricular weight index and echocardiographic measures of right ventricular systolic function. Conclusions The novel parameter E/DT outperforms the tissue Doppler index E/e' in detecting and monitoring lung remodeling induced by pressure overload. The results may provide a handy tool to point towards secondary lung disease in HFpEF and warrant further clinical investigations. PMID:24312628

  6. Significant biases affecting abundance determinations

    NASA Astrophysics Data System (ADS)

    Wesson, Roger

    2015-08-01

    I have developed two highly efficient codes to automate analyses of emission line nebulae. The tools place particular emphasis on the propagation of uncertainties. The first tool, ALFA, uses a genetic algorithm to rapidly optimise the parameters of gaussian fits to line profiles. It can fit emission line spectra of arbitrary resolution, wavelength range and depth, with no user input at all. It is well suited to highly multiplexed spectroscopy such as that now being carried out with instruments such as MUSE at the VLT. The second tool, NEAT, carries out a full analysis of emission line fluxes, robustly propagating uncertainties using a Monte Carlo technique.Using these tools, I have found that considerable biases can be introduced into abundance determinations if the uncertainty distribution of emission lines is not well characterised. For weak lines, normally distributed uncertainties are generally assumed, though it is incorrect to do so, and significant biases can result. I discuss observational evidence of these biases. The two new codes contain routines to correctly characterise the probability distributions, giving more reliable results in analyses of emission line nebulae.
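
    As a minimal sketch of the Monte Carlo style of uncertainty propagation mentioned above (not the NEAT code itself), the example below resamples two hypothetical emission-line fluxes and shows how the derived line ratio acquires an asymmetric uncertainty distribution; the line names and values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Measured emission-line fluxes (arbitrary units) with 1-sigma uncertainties.
# Values are illustrative, not taken from the paper.
lines = {"Halpha": (300.0, 6.0), "Hbeta": (100.0, 8.0)}

def monte_carlo_ratio(n=100_000):
    """Propagate flux uncertainties into the Halpha/Hbeta ratio by resampling.
    With an 8% error on the weak line, the ratio's distribution is visibly
    skewed, so quoting a symmetric error would bias any derived quantity."""
    ha = rng.normal(*lines["Halpha"], size=n)
    hb = rng.normal(*lines["Hbeta"], size=n)
    ratio = ha / hb
    lo, med, hi = np.percentile(ratio, [15.87, 50, 84.13])
    return med, med - lo, hi - med

median, err_lo, err_hi = monte_carlo_ratio()
print(f"Halpha/Hbeta = {median:.3f} -{err_lo:.3f} +{err_hi:.3f}")
```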

  7. Enhanced probability-selection artificial bee colony algorithm for economic load dispatch: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ghani Abro, Abdul; Mohamad-Saleh, Junita

    2014-10-01

    The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through appropriate division of load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing the production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm. PS-ABC generates optimal solutions using three different mutation equations simultaneously. The results show improved performance of PS-ABC over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced and, hence, PS-ABC is prone to premature convergence. Therefore, this research work has replaced the mutation equations and has improved the scout-bee stage of PS-ABC for enhancing the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, multiple-fuel effects, valve-point effects, and toxic gas emission constraints. The results reveal that the proposed algorithm has the best capability to yield the optimal solution for the problem among the compared algorithms.
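
    For readers unfamiliar with the ELD formulation, the sketch below evaluates the standard valve-point-effect fuel cost and applies the canonical ABC neighbour-search equation with greedy selection. The unit coefficients and the penalty handling are made up for illustration, and the mutation equation shown is the basic ABC one rather than the replacements proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 3-unit system: cost = a*P^2 + b*P + c + |e*sin(f*(Pmin - P))|
# (the standard valve-point-effect formulation; coefficients here are made up).
A = np.array([0.0016, 0.0019, 0.0015])
B = np.array([7.92, 7.85, 7.97])
C = np.array([561.0, 310.0, 78.0])
E = np.array([300.0, 200.0, 150.0])
F = np.array([0.0315, 0.042, 0.063])
PMIN = np.array([100.0, 50.0, 50.0])
PMAX = np.array([600.0, 400.0, 200.0])
DEMAND = 850.0

def cost(P):
    fuel = np.sum(A * P**2 + B * P + C + np.abs(E * np.sin(F * (PMIN - P))))
    balance_penalty = 1e4 * abs(P.sum() - DEMAND)   # simple power-balance penalty
    return fuel + balance_penalty

def abc_mutation(x, population):
    """Canonical ABC neighbour-search step: v_ij = x_ij + phi*(x_ij - x_kj)."""
    k = rng.integers(len(population))
    j = rng.integers(len(x))
    v = x.copy()
    v[j] += rng.uniform(-1, 1) * (x[j] - population[k][j])
    return np.clip(v, PMIN, PMAX)

pop = [rng.uniform(PMIN, PMAX) for _ in range(20)]
for _ in range(2000):
    i = rng.integers(len(pop))
    trial = abc_mutation(pop[i], pop)
    if cost(trial) < cost(pop[i]):      # greedy selection, as in ABC
        pop[i] = trial
best = min(pop, key=cost)
print(best, cost(best))
```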

  8. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  9. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  10. Statistical or biological significance?

    PubMed

    Saxon, Emma

    2015-01-01

    Oat plants grown at an agricultural research facility produce higher yields in Field 1 than in Field 2, under well fertilised conditions and with similar weather exposure; all oat plants in both fields are healthy and show no sign of disease. In this study, the authors hypothesised that the soil microbial community might be different in each field, and these differences might explain the difference in oat plant growth. They carried out a metagenomic analysis of the 16S ribosomal 'signature' sequences from bacteria in 50 randomly located soil samples in each field to determine the composition of the bacterial community. The study identified >1000 species, most of which were present in both fields. The authors identified two plant growth-promoting species that were significantly reduced in soil from Field 2 (Student's t-test P < 0.05), and concluded that these species might have contributed to reduced yield. PMID:26541972
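
    A minimal sketch of the statistical comparison described in the study, using invented abundance numbers and Welch's t-test via SciPy; it also illustrates the article's point that a significant p-value by itself says nothing about biological relevance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical relative abundances (%) of one growth-promoting species
# in 50 soil samples per field; numbers are invented for illustration.
field1 = rng.normal(loc=2.4, scale=0.6, size=50)
field2 = rng.normal(loc=2.0, scale=0.6, size=50)

t, p = stats.ttest_ind(field1, field2, equal_var=False)   # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")

# A small p-value establishes statistical significance only; whether a
# ~0.4 percentage-point difference matters for oat yield is the separate
# question of biological significance raised in the article.
```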

  11. Anthropological significance of phenylketonuria.

    PubMed

    Saugstad, L F

    1975-01-01

    The highest incidence rates of phenylketonuria (PKU) have been observed in Ireland and Scotland. Parents heterozygous for PKU in Norway differ significantly from the general population in the Rhesus, Kell and PGM systems. The parents investigated showed an excess of Rh negative, Kell plus and PGM type 1 individuals, which makes them similar to the present populations in Ireland and Scotland. It is postulated that the heterozygotes for PKU in Norway are descended from a completely assimilated sub-population of Celtic origin, who came or were brought here, 1000 years ago. Bronze objects of Western European (Scottish, Irish) origin, found in Viking graves widely distributed in Norway, have been taken as evidence of Vikings returning with loot (including a number of Celts) from Western Viking settlements. The continuity of residence since the Viking age in most habitable parts of Norway, and what seems to be a nearly complete regional relationship between the sites where Viking graves contain western imported objects and the birthplaces of grandparents of PKUs identified in Norway, lend further support to the hypothesis that the heterozygotes for PKU in Norway are descended from a completely assimilated subpopulation. The remarkable resemblance between Iceland and Ireland, in respect of several genetic markers (including the Rhesus, PGM and Kell systems), is considered to be an expression of a similar proportion of people of Celtic origin in each of the two countries. Their identical, high incidence rates of PKU are regarded as further evidence of this. The significant decline in the incidence of PKU when one passes from Ireland, Scotland and Iceland, to Denmark and on to Norway and Sweden, is therefore explained as being related to a reduction in the proportion of inhabitants of Celtic extraction in the respective populations. PMID:803884

  12. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in the local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal functions. As a result, we proposed an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve the exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  13. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    PubMed Central

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in the local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal functions. As a result, we proposed an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve the exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  14. Truss optimization on shape and sizing with frequency constraints based on orthogonal multi-gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Khatibinia, Mohsen; Sadegh Naseralavi, Seyed

    2014-12-01

    Structural optimization on shape and sizing with frequency constraints is well-known as a highly nonlinear dynamic optimization problem with several local optimum solutions. Hence, efficient optimization algorithms should be utilized to solve this problem. In this study, the orthogonal multi-gravitational search algorithm (OMGSA), a meta-heuristic algorithm, is introduced to solve truss optimization on shape and sizing with frequency constraints. The OMGSA is a hybrid approach based on a combination of multi-gravitational search algorithm (multi-GSA) and an orthogonal crossover (OC). In multi-GSA, the population is split into several sub-populations. Then, each sub-population is independently evaluated by an improved gravitational search algorithm (IGSA). Furthermore, the OC is used in the proposed OMGSA in order to find and exploit the global solution in the search space. The capability of OMGSA is demonstrated through six benchmark examples. Numerical results show that the proposed OMGSA outperforms the other optimization techniques.

  15. Multi-objective Job Shop Rescheduling with Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Hao, Xinchang; Gen, Mitsuo

    In current manufacturing systems, production processes and management are involved in many unexpected events and new requirements emerging constantly. This dynamic environment implies that operation rescheduling is usually indispensable. A wide variety of procedures and heuristics has been developed to improve the quality of rescheduling. However, most proposed approaches are usually derived under simplified assumptions. As a consequence, these approaches might be inconsistent with the actual requirements in a real production environment, i.e., they are often unsuitable and inflexible to respond efficiently to the frequent changes. In this paper, a multi-objective job shop rescheduling problem (moJSRP) is formulated to improve the practical application of rescheduling. To solve the moJSRP model, an evolutionary algorithm is designed, in which a random key-based representation and interactive adaptive-weight (i-awEA) fitness assignment are embedded. To verify the effectiveness, the proposed algorithm has been compared with other approaches and benchmarks on the robustness of moJSRP optimization. The comparison results show that iAWGA-A is better than the weighted fitness method in terms of effectiveness and stability. Similarly, iAWGA-A also outperforms other well-established approaches such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2).

  16. Computations and algorithms in physical and biological problems

    NASA Astrophysics Data System (ADS)

    Qin, Yu

    This dissertation presents the applications of state-of-the-art computation techniques and data analysis algorithms in three physical and biological problems: assembling DNA pieces, optimizing self-assembly yield, and identifying correlations from large multivariate datasets. In the first topic, in-depth analysis of using Sequencing by Hybridization (SBH) to reconstruct target DNA sequences shows that a modified reconstruction algorithm can overcome the theoretical boundary without the need for different types of biochemical assays and is robust to error. In the second topic, consistent with theoretical predictions, simulations using Graphics Processing Unit (GPU) demonstrate how controlling the short-ranged interactions between particles and controlling the concentrations optimize the self-assembly yield of a desired structure, and nonequilibrium behavior when optimizing concentrations is also unveiled by leveraging the computation capacity of GPUs. In the last topic, a methodology to incorporate existing categorization information into the search process to efficiently reconstruct the optimal true correlation matrix for multivariate datasets is introduced. Simulations on both synthetic and real financial datasets show that the algorithm is able to detect signals below the Random Matrix Theory (RMT) threshold. These three problems are representative of using massive computation techniques and data analysis algorithms to tackle optimization problems, and of outperforming the theoretical boundary when incorporating prior information into the computation.

  17. The Kernel Adaptive Autoregressive-Moving-Average Algorithm.

    PubMed

    Li, Kan; Príncipe, José C

    2016-02-01

    In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049

  18. GPS-Free Localization Algorithm for Wireless Sensor Networks

    PubMed Central

    Wang, Lei; Xu, Qingzheng

    2010-01-01

    Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density, and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694
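
    As a hedged illustration of merging two local coordinate systems, the sketch below estimates a homogeneous transformation matrix from nodes known in both systems using a standard Kabsch/Umeyama least-squares fit (reflections permitted, which is one way to cope with flipped local maps). The paper's own derivation may differ; the node coordinates here are synthetic.

```python
import numpy as np

def merge_transform(P, Q):
    """Estimate the 3x3 homogeneous matrix T mapping points of coordinate
    system A into system B (2-D rigid motion, reflections allowed) from the
    coordinates of nodes shared by both local systems. This is a generic
    Kabsch/Umeyama-style least-squares solution, not the paper's derivation."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                       # may include a reflection (flip)
    t = cq - R @ cp
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

# Shared nodes known in system A and (rotated + translated) in system B
A = np.array([[0, 0], [4, 0], [4, 3], [1, 2]], float)
theta = np.deg2rad(30)
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
B = A @ Rot.T + np.array([10.0, -2.0])

T = merge_transform(A, B)
A_h = np.c_[A, np.ones(len(A))]          # homogeneous coordinates
print(np.allclose((T @ A_h.T).T[:, :2], B))   # True: A is mapped into B's frame
```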

  19. GPS-free localization algorithm for wireless sensor networks.

    PubMed

    Wang, Lei; Xu, Qingzheng

    2010-01-01

    Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density, and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694

  20. Significant Radionuclides Determination

    SciTech Connect

    Jo A. Ziegler

    2001-07-31

    The purpose of this calculation is to identify radionuclides that are significant to offsite doses from potential preclosure events for spent nuclear fuel (SNF) and high-level radioactive waste expected to be received at the potential Monitored Geologic Repository (MGR). In this calculation, high-level radioactive waste is included in references to DOE SNF. A previous document, ''DOE SNF DBE Offsite Dose Calculations'' (CRWMS M&O 1999b), calculated the source terms and offsite doses for Department of Energy (DOE) and Naval SNF for use in design basis event analyses. This calculation reproduces only DOE SNF work (i.e., no naval SNF work is included in this calculation) created in ''DOE SNF DBE Offsite Dose Calculations'' and expands the calculation to include DOE SNF expected to produce a high dose consequence (even though the quantity of the SNF is expected to be small) and SNF owned by commercial nuclear power producers. The calculation does not address any specific off-normal/DBE event scenarios for receiving, handling, or packaging of SNF. The results of this calculation are developed for comparative analysis to establish the important radionuclides and do not represent the final source terms to be used for license application. This calculation will be used as input to preclosure safety analyses and is performed in accordance with procedure AP-3.12Q, ''Calculations'', and is subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (DOE 2000) as determined by the activity evaluation contained in ''Technical Work Plan for: Preclosure Safety Analysis, TWP-MGR-SE-000010'' (CRWMS M&O 2000b) in accordance with procedure AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities''.

  1. Fungi producing significant mycotoxins.

    PubMed

    2012-01-01

    Mycotoxins are secondary metabolites of microfungi that are known to cause sickness or death in humans or animals. Although many such toxic metabolites are known, it is generally agreed that only a few are significant in causing disease: aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, zearalenone, and ergot alkaloids. These toxins are produced by just a few species from the common genera Aspergillus, Penicillium, Fusarium, and Claviceps. All Aspergillus and Penicillium species either are commensals, growing in crops without obvious signs of pathogenicity, or invade crops after harvest and produce toxins during drying and storage. In contrast, the important Fusarium and Claviceps species infect crops before harvest. The most important Aspergillus species, occurring in warmer climates, are A. flavus and A. parasiticus, which produce aflatoxins in maize, groundnuts, tree nuts, and, less frequently, other commodities. The main ochratoxin A producers, A. ochraceus and A. carbonarius, commonly occur in grapes, dried vine fruits, wine, and coffee. Penicillium verrucosum also produces ochratoxin A but occurs only in cool temperate climates, where it infects small grains. F. verticillioides is ubiquitous in maize, with an endophytic nature, and produces fumonisins, which are generally more prevalent when crops are under drought stress or suffer excessive insect damage. It has recently been shown that Aspergillus niger also produces fumonisins, and several commodities may be affected. F. graminearum, which is the major producer of deoxynivalenol and zearalenone, is pathogenic on maize, wheat, and barley and produces these toxins whenever it infects these grains before harvest. Also included is a short section on Claviceps purpurea, which produces sclerotia among the seeds in grasses, including wheat, barley, and triticale. The main thrust of the chapter contains information on the identification of these fungi and their morphological characteristics, as well as factors

  2. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  3. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834

  4. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  5. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding (AE) and phase encoding double random phase encryption (DRPE) systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
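
    For context, the sketch below implements the plain Gerchberg-Saxton loop that the proposed SPP GS and GS/HIO variants build on: amplitudes are enforced alternately in the object and Fourier domains while the phase is retained. The perturbation and hybrid input-output steps of the paper are not reproduced, and the toy target is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def gerchberg_saxton(source_amp, target_amp, iters=200):
    """Plain GS iteration: enforce the known amplitude in each domain while
    keeping the current phase estimate. The paper's SPP GS and GS/HIO variants
    add perturbations / feedback on top of this basic loop."""
    phase = rng.uniform(0, 2 * np.pi, source_amp.shape)
    field = source_amp * np.exp(1j * phase)
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))          # impose Fourier-domain amplitude
        field = np.fft.ifft2(F)
        field = source_amp * np.exp(1j * np.angle(field))   # impose object-domain amplitude
    return np.angle(field)

# Toy problem: recover a phase screen consistent with the two measured amplitudes
true_phase = rng.uniform(0, 2 * np.pi, (64, 64))
source_amp = np.ones((64, 64))
target_amp = np.abs(np.fft.fft2(source_amp * np.exp(1j * true_phase)))

est_phase = gerchberg_saxton(source_amp, target_amp)
residual = np.abs(np.abs(np.fft.fft2(source_amp * np.exp(1j * est_phase))) - target_amp)
print(residual.mean())   # the Fourier-amplitude residual shrinks as the iteration converges
```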

  6. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different than software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is vastly similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.

  7. An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First of all, within the framework of SFLA, an improved frog-leap operator is designed in which the effect of the global optimal information on the frog leaping and information exchange between frog individuals is combined with genetic mutation with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
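
    A small sketch of two ingredients named in the abstract, under assumptions: the greedy transform used to repair infeasible knapsack solutions, and Mantegna's algorithm for Lévy-distributed step lengths as commonly used in cuckoo search. The binarisation of the Lévy step and the instance data are illustrative choices, not the paper's exact operators.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(5)

values  = np.array([10, 40, 30, 50, 35, 25, 15])
weights = np.array([ 1,  4,  3,  5,  4,  3,  2])
CAP = 10

def greedy_repair(x):
    """Greedy transform: drop items with the worst value/weight ratio until the
    solution is feasible, then greedily add back any item that still fits."""
    x = x.astype(bool).copy()
    order = np.argsort(values / weights)          # worst ratio first
    for i in order:
        if weights[x].sum() <= CAP:
            break
        x[i] = False
    for i in order[::-1]:                         # best ratio first
        if not x[i] and weights[x].sum() + weights[i] <= CAP:
            x[i] = True
    return x

def levy_step(beta=1.5, size=1):
    """Mantegna's algorithm for Lévy-distributed step lengths (used by CS)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

# One cuckoo-search-style move: flip bits of a binary solution with probability
# driven by a Lévy step, then repair feasibility with the greedy transform.
x = rng.integers(0, 2, len(values)).astype(bool)
flip = rng.random(len(values)) < np.abs(np.tanh(levy_step(size=len(values))))
trial = greedy_repair(np.where(flip, ~x, x))
print(trial.astype(int), values[trial].sum(), weights[trial].sum())
```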

  8. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First of all, within the framework of SFLA, an improved frog-leap operator is designed in which the effect of the global optimal information on the frog leaping and information exchange between frog individuals is combined with genetic mutation with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940

  9. Bayesian fusion of algorithms for the robust estimation of respiratory rate from the photoplethysmogram.

    PubMed

    Zhu, Tingting; Pimentel, Marco A F; Clifford, Gari D; Clifton, David A

    2015-08-01

    Respiratory rate (RR) is a key vital sign that is monitored to assess the health of patients. With the increasing availability of wearable devices, it is important that RR is extracted in a robust and noninvasive manner from the photoplethysmogram (PPG) acquired from pulse oximeters and similar devices. However, existing methods of noninvasive RR estimation suffer from a lack of robustness, with the result that they are not used in clinical practice. We propose a Bayesian approach to fusing the outputs of many RR estimation algorithms to improve the overall robustness of the resulting estimates. Our method estimates the accuracy of each algorithm and jointly infers the fused RR estimate in an unsupervised manner, with the aim of producing a fused estimate that is more accurate than any of the algorithms taken individually. This approach is novel in the literature, where the latter has so far concentrated on attempting to produce single algorithms for RR estimation, without resulting in systems that have penetrated into clinical practice. A publicly-available dataset, Capnobase, was used to validate the performance of our proposed model. Our proposed methodology was compared to the best-performing individual algorithm from the literature, as well as to the results of using common fusion methodologies such as averaging, median, and maximum likelihood (ML). Our proposed methodology resulted in a mean absolute error (MAE) of 1.98 breaths per minute (bpm), outperforming other fusion strategies (mean fusion: 2.95 bpm; median fusion: 2.33 bpm; ML: 2.30 bpm). It also outperformed the best single algorithm (2.39 bpm) and the benchmark algorithm proposed for use with Capnobase (2.22 bpm). We conclude that the proposed fusion methodology can be used to combine RR estimates from multiple sources derived from the PPG, to infer a reliable and robust estimation of the respiratory rate in an unsupervised manner. PMID:26737693

  10. Optimizing Algorithm Choice for Metaproteomics: Comparing X!Tandem and Proteome Discoverer for Soil Proteomes

    NASA Astrophysics Data System (ADS)

    Diaz, K. S.; Kim, E. H.; Jones, R. M.; de Leon, K. C.; Woodcroft, B. J.; Tyson, G. W.; Rich, V. I.

    2014-12-01

    The growing field of metaproteomics links microbial communities to their expressed functions by using mass spectrometry methods to characterize community proteins. Comparison of mass spectrometry protein search algorithms and their biases is crucial for maximizing the quality and amount of protein identifications in mass spectral data. Available algorithms employ different approaches when mapping mass spectra to peptides against a database. We compared mass spectra from four microbial proteomes derived from high-organic content soils searched with two search algorithms: 1) Sequest HT as packaged within Proteome Discoverer (v.1.4) and 2) X!Tandem as packaged in TransProteomicPipeline (v.4.7.1). Searches used matched metagenomes, and results were filtered to allow identification of high probability proteins. There was little overlap in proteins identified by both algorithms, on average just ~24% of the total. However, when adjusted for spectral abundance, the overlap improved to ~70%. Proteome Discoverer generally outperformed X!Tandem, identifying an average of 12.5% more proteins than X!Tandem, with X!Tandem identifying more proteins only in the first two proteomes. For spectrally-adjusted results, the algorithms were similar, with X!Tandem marginally outperforming Proteome Discoverer by an average of ~4%. We then assessed differences in heat shock protein (HSP) identification by the two algorithms by BLASTing identified proteins against the Heat Shock Protein Information Resource, because HSP hits typically account for the majority of the signal in proteomes, due to extraction protocols. Total HSP identifications for each of the 4 proteomes were approximately ~15%, ~11%, ~17%, and ~19%, with ~14% for total HSPs with redundancies removed. Of the ~15% average of proteins from the 4 proteomes identified as HSPs, ~10% of proteins and spectra were identified by both algorithms. On average, Proteome Discoverer identified ~9% more HSPs than X!Tandem.
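
    The sketch below shows how an overlap figure can change once spectral abundance is taken into account, using invented protein-to-spectral-count tables in place of real Proteome Discoverer and X!Tandem output.

```python
# Overlap between the protein sets reported by two search algorithms, computed
# both by unique identifications and weighted by spectral counts. The
# dictionaries below are invented stand-ins for real search-engine output.

pd_hits      = {"protA": 120, "protB": 45, "protC": 8, "protD": 2}   # Proteome Discoverer
xtandem_hits = {"protA": 110, "protB": 40, "protE": 5, "protF": 1}   # X!Tandem

def overlap(a, b):
    shared = set(a) & set(b)
    union = set(a) | set(b)
    by_ids = len(shared) / len(union)
    spectra_shared = sum(a[p] for p in shared) + sum(b[p] for p in shared)
    spectra_total = sum(a.values()) + sum(b.values())
    by_spectra = spectra_shared / spectra_total
    return by_ids, by_spectra

ids, spec = overlap(pd_hits, xtandem_hits)
print(f"overlap by protein IDs: {ids:.0%}, by spectral abundance: {spec:.0%}")
# As in the study, abundant proteins tend to be found by both engines, so the
# spectrum-weighted overlap is much higher than the raw ID overlap.
```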

  11. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  12. Exploring a new best information algorithm for Iliad.

    PubMed Central

    Guo, D.; Lincoln, M. J.; Haug, P. J.; Turner, C. W.; Warner, H. R.

    1991-01-01

    Iliad is a diagnostic expert system for internal medicine. One important feature that Iliad offers is the ability to analyze a particular patient case and to determine the most cost-effective method for pursuing the work-up. Iliad's current "best information" algorithm has not been previously validated and compared to other potential algorithms. Therefore, this paper presents a comparison of four new algorithms to the current algorithm. The basis for this comparison was eighteen "vignette" cases derived from real patient cases from the University of Utah Medical Center. The results indicated that the current algorithm can be significantly improved. More promising algorithms are suggested for future investigation. PMID:1807677

  13. Selective paired ion contrast analysis: a novel algorithm for analyzing postprocessed LC-MS metabolomics data possessing high experimental noise.

    PubMed

    Mak, Tytus D; Laiakis, Evagelia C; Goudarzi, Maryam; Fornace, Albert J

    2015-03-17

    One of the consequences in analyzing biological data from noisy sources, such as human subjects, is the sheer variability of experimentally irrelevant factors that cannot be controlled for. This holds true especially in metabolomics, the global study of small molecules in a particular system. While metabolomics can offer deep quantitative insight into the metabolome via easy-to-acquire biofluid samples such as urine and blood, the aforementioned confounding factors can easily overwhelm attempts to extract relevant information. This can mar potentially crucial applications such as biomarker discovery. As such, a new algorithm, called Selective Paired Ion Contrast (SPICA), has been developed with the intent of extracting potentially biologically relevant information from the noisiest of metabolomic data sets. The basic idea of SPICA is built upon redefining the fundamental unit of statistical analysis. Whereas the vast majority of algorithms analyze metabolomics data on a single-ion basis, SPICA relies on analyzing ion-pairs. A standard metabolomic data set is reinterpreted by exhaustively considering all possible ion-pair combinations. Statistical comparisons between sample groups are made only by analyzing the differences in these pairs, which may be crucial in situations where no single metabolite can be used for normalization. With SPICA, human urine data sets from patients undergoing total body irradiation (TBI) and from a colorectal cancer (CRC) relapse study were analyzed in a statistically rigorous manner not possible with conventional methods. In the TBI study, 3530 statistically significant ion-pairs were identified, from which numerous putative radiation specific metabolite-pair biomarkers that mapped to potentially perturbed metabolic pathways were elucidated. In the CRC study, SPICA identified 6461 statistically significant ion-pairs, several of which putatively mapped to folic acid biosynthesis, a key pathway in colorectal cancer. Utilizing support
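
    As a rough sketch of the ion-pair idea (not the published SPICA implementation), the example below exhaustively tests all pairwise intensity differences between two groups with a Welch t-test and a Bonferroni correction; the feature table is synthetic and the statistical details are assumptions.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)

# Toy LC-MS feature table: rows = samples, columns = ion intensities (log scale).
# Ions 0 and 3 move in opposite directions, so their pairwise difference
# separates the groups even when each ion on its own is noisy.
n_ions = 20
control = rng.normal(10, 1, (8, n_ions))
treated = rng.normal(10, 1, (8, n_ions))
treated[:, 0] += 2
treated[:, 3] -= 2

def spica_like_pairs(g1, g2, alpha=0.05):
    """Exhaustively test all ion-pair differences between two groups
    (Welch t-test with Bonferroni correction; the published SPICA algorithm's
    statistics may differ from this sketch)."""
    pairs = list(combinations(range(g1.shape[1]), 2))
    hits = []
    for i, j in pairs:
        t, p = stats.ttest_ind(g1[:, i] - g1[:, j], g2[:, i] - g2[:, j],
                               equal_var=False)
        if p < alpha / len(pairs):
            hits.append((i, j, p))
    return hits

print(spica_like_pairs(control, treated))   # the pair (0, 3) should stand out
```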

  14. A permutation based simulated annealing algorithm to predict pseudoknotted RNA secondary structures.

    PubMed

    Tsang, Herbert H; Wiese, Kay C

    2015-01-01

    Pseudoknots are RNA tertiary structures which perform essential biological functions. This paper discusses SARNA-Predict-pk, an RNA pseudoknotted secondary structure prediction algorithm based on Simulated Annealing (SA). The research presented here extends previous work on SARNA-Predict and further examines the effect of extending the algorithm to include prediction of RNA secondary structures with pseudoknots. An evaluation of the performance of SARNA-Predict-pk in terms of prediction accuracy is made via comparison with several state-of-the-art prediction algorithms using 20 individual known structures from seven RNA classes. We measured the sensitivity and specificity of nine prediction algorithms. Three of these are dynamic programming algorithms: Pseudoknot (pknotsRE), NUPACK, and pknotsRG-mfe. One uses a statistical clustering approach: Sfold; and the other five are heuristic algorithms: the SARNA-Predict-pk, ILM, STAR, IPknot, and HotKnots algorithms. The results presented in this paper demonstrate that SARNA-Predict-pk can outperform other state-of-the-art algorithms in terms of prediction accuracy. This supports the use of the proposed method for pseudoknotted RNA secondary structure prediction of other known structures. PMID:26558299

  15. On the use of harmony search algorithm in the training of wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between the given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
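
    A minimal harmony search loop is sketched below for a generic continuous objective; in the paper's setting each harmony would encode one WNN weight configuration and the objective would be classification accuracy. The partitioned initialization described above is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def harmony_search(obj, dim, bounds=(-5, 5), hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=5000):
    """Plain harmony search minimising `obj`. Each harmony is a candidate
    solution vector (here a point of a test function; for WNN training it
    would be a weight vector and `obj` the negative classification accuracy)."""
    lo, hi = bounds
    memory = rng.uniform(lo, hi, (hms, dim))
    costs = np.array([obj(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                      # memory consideration
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:                   # pitch adjustment
                    new[d] += bw * rng.uniform(-1, 1) * (hi - lo)
            else:                                        # random selection
                new[d] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        c = obj(new)
        worst = costs.argmax()
        if c < costs[worst]:                             # replace the worst harmony
            memory[worst], costs[worst] = new, c
    return memory[costs.argmin()], costs.min()

sphere = lambda x: float(np.sum(x ** 2))
best, best_cost = harmony_search(sphere, dim=10)
print(best_cost)
```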

  16. Evaluating and comparing algorithms for respiratory motion prediction.

    PubMed

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-06-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real-world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm, which is one of the algorithms currently used in the CyberKnife, is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient's respiratory
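
    As background, the sketch below implements a generic normalized LMS (nLMS) forward predictor of the kind evaluated in the study, together with the relative RMS error measure (prediction error divided by the error of not predicting at all). The breathing trace is synthetic and the filter settings are assumptions, not those used clinically.

```python
import numpy as np

def nlms_predict(signal, horizon=5, taps=30, mu=0.5, eps=1e-6):
    """Normalised LMS forward predictor: predict the sample `horizon` steps
    ahead from the last `taps` samples, to compensate a fixed system latency.
    Strictly causal: weights are only updated with errors that have already
    become observable. Tap count and step size are illustrative choices."""
    w = np.zeros(taps)
    pred = np.zeros_like(signal)
    for n in range(taps + horizon, len(signal) - horizon):
        # update with the error of the prediction made `horizon` steps ago
        x_old = signal[n - horizon - taps:n - horizon][::-1]
        e = signal[n] - w @ x_old
        w += mu * e * x_old / (x_old @ x_old + eps)
        # predict `horizon` steps ahead from the newest samples
        x_now = signal[n - taps:n][::-1]
        pred[n + horizon] = w @ x_now
    return pred, taps + 2 * horizon          # predictions are valid from this index on

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.04)                   # 25 Hz sampling, one minute
breathing = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)

horizon = 5
pred, start = nlms_predict(breathing, horizon=horizon)
err_pred = breathing[start:] - pred[start:]
err_none = breathing[start:] - breathing[start - horizon:-horizon]   # no prediction
rel_rms = np.sqrt(np.mean(err_pred ** 2) / np.mean(err_none ** 2))
print(f"relative RMS: {rel_rms:.2f}")        # values below 1 mean prediction helps
```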

  17. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for intellectual property right protection, digital watermarking techniques have been widely studied and used. However, due to the problems of data volume and color shifting, watermarking techniques for color images have been less widely studied, although color images are the principal content in multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark, the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transformation. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted so as to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.
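
    A generic sketch of DCT-domain embedding in the intensity channel, under assumptions: one bit is hidden per 8x8 block by quantising a mid-frequency coefficient (quantisation-index modulation). The adaptive, HVS-driven bit allocation and the wavelet preprocessing of the watermark described above are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(intensity, bits, q=12.0, coeff=(3, 2)):
    """Embed one bit per 8x8 block into a mid-frequency DCT coefficient of the
    intensity (I) channel by quantisation-index modulation. A generic sketch of
    DCT-domain embedding, not the adaptive scheme of the paper."""
    img = intensity.astype(float).copy()
    k = 0
    for r in range(0, img.shape[0] - 7, 8):
        for c in range(0, img.shape[1] - 7, 8):
            if k >= len(bits):
                return img
            block = dctn(img[r:r+8, c:c+8], norm="ortho")
            base = np.round(block[coeff] / q)
            if int(base) % 2 != bits[k]:                  # force parity = bit
                base += 1 if block[coeff] >= base * q else -1
            block[coeff] = base * q
            img[r:r+8, c:c+8] = idctn(block, norm="ortho")
            k += 1
    return img

def extract_bits(intensity, n_bits, q=12.0, coeff=(3, 2)):
    bits = []
    for r in range(0, intensity.shape[0] - 7, 8):
        for c in range(0, intensity.shape[1] - 7, 8):
            if len(bits) >= n_bits:
                return bits
            block = dctn(intensity[r:r+8, c:c+8].astype(float), norm="ortho")
            bits.append(int(np.round(block[coeff] / q)) % 2)
    return bits

rng = np.random.default_rng(4)
host = rng.uniform(0, 255, (64, 64))          # stand-in for the I channel of an HSI image
payload = [int(b) for b in rng.integers(0, 2, 32)]
marked = embed_bits(host, payload)
print(extract_bits(marked, 32) == payload)    # True in the attack-free case
```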

  18. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
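
    The following Python sketch illustrates the flavor of the first type of subalgorithm described above: it searches shift/mask combinations until every key in the set maps to a distinct value, preferring the combination whose hashed values span the most compact range. The search bounds and the compactness criterion are assumptions chosen for illustration, not the NASA implementation.

def synthesize_shift_mask(keys, max_shift=32, max_mask_bits=16):
    # Search (shift, mask) pairs until every key hashes to a distinct value,
    # preferring the pair whose hashed values span the smallest range.
    best = None
    for shift in range(max_shift):
        for bits in range(1, max_mask_bits + 1):
            mask = (1 << bits) - 1
            hashed = [(k >> shift) & mask for k in keys]
            if len(set(hashed)) == len(keys):            # injective on this key set
                span = max(hashed) - min(hashed) + 1
                if best is None or span < best[0]:
                    best = (span, shift, mask)
    return best                                          # (span, shift, mask) or None

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3]
span, shift, mask = synthesize_shift_mask(keys)
# constant-time membership test: index a table of size `span` by ((k >> shift) & mask)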

  19. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to become trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than those produced by K-means and Particle Swarm Optimization (PSO).
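
    As a reference point for the objective function mentioned above, a minimal Python sketch of the Average Distance to Document Centroid (ADDC) is given below; the exact distance measure and weighting used by the authors may differ.

import numpy as np

def addc(doc_vectors, labels, centroids):
    # Average Distance to Document Centroid: mean, over clusters, of the mean
    # Euclidean distance between member documents and their cluster centroid.
    per_cluster = []
    for c in range(centroids.shape[0]):
        members = doc_vectors[labels == c]
        if members.size:
            per_cluster.append(np.linalg.norm(members - centroids[c], axis=1).mean())
    return float(np.mean(per_cluster))    # lower means tighter, more compact clusters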

  20. Acceleration of iterative image restoration algorithms.

    PubMed

    Biggs, D S; Andrews, M

    1997-03-10

    A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
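
    The sketch below shows, in Python and in one dimension, the general shape of such a vector-extrapolation acceleration wrapped around Richardson-Lucy iterations, in the spirit of the Biggs-Andrews scheme: the next iterate is computed from a point extrapolated along the direction of recent updates. It is a simplified illustration, not the published algorithm, and omits its additional safeguards.

import numpy as np

def rl_step(x, data, psf):
    # One Richardson-Lucy update (1-D, 'same' convolution; mirrored PSF as adjoint).
    est = np.convolve(x, psf, mode='same')
    ratio = data / np.maximum(est, 1e-12)
    return x * np.convolve(ratio, psf[::-1], mode='same')

def accelerated_rl(data, psf, n_iter=50):
    # Extrapolate along the direction of recent iterates (alpha from the inner
    # product of successive update steps), then take an ordinary RL step from
    # the predicted point.
    x_prev = x = np.maximum(data, 1e-12)
    g_prev = g_prev2 = None
    for _ in range(n_iter):
        if g_prev is not None and g_prev2 is not None:
            alpha = np.dot(g_prev, g_prev2) / (np.dot(g_prev2, g_prev2) + 1e-12)
            alpha = float(np.clip(alpha, 0.0, 0.99))
        else:
            alpha = 0.0
        y = np.maximum(x + alpha * (x - x_prev), 0.0)   # extrapolated prediction
        x_new = rl_step(y, data, psf)
        g_prev2, g_prev = g_prev, x_new - y             # store update directions
        x_prev, x = x, x_new
    return x

# usage sketch: deblur a simple 1-D signal
psf = np.ones(5) / 5.0
truth = np.zeros(200); truth[60:80] = 1.0; truth[120] = 3.0
blurred = np.convolve(truth, psf, mode='same') + 1e-3
restored = accelerated_rl(blurred, psf, n_iter=100)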

  1. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  2. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures require considerable bandwidth. Accordingly, there are many environments (e.g., wireless, ad-hoc, and remote sensing networks) where public-key requirements are prohibitive and such algorithms cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied for. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  3. Genotyping NAT2 with only two SNPs (rs1041983 and rs1801280) outperforms the tagging SNP rs1495741 and is equivalent to the conventional 7-SNP NAT2 genotype.

    PubMed

    Selinski, Silvia; Blaszkewicz, Meinolf; Lehmann, Marie-Louise; Ovsiannikov, Daniel; Moormann, Oliver; Guballa, Christoph; Kress, Alexander; Truss, Michael C; Gerullis, Holger; Otto, Thomas; Barski, Dimitri; Niegisch, Günter; Albers, Peter; Frees, Sebastian; Brenner, Walburgis; Thüroff, Joachim W; Angeli-Greaves, Miriam; Seidel, Thilo; Roth, Gerhard; Dietrich, Holger; Ebbinghaus, Rainer; Prager, Hans M; Bolt, Hermann M; Falkenstein, Michael; Zimmermann, Anna; Klein, Torsten; Reckwitz, Thomas; Roemer, Hermann C; Löhlein, Dietrich; Weistenhöfer, Wobbeke; Schöps, Wolfgang; Hassan Rizvi, Syed Adibul; Aslam, Muhammad; Bánfi, Gergely; Romics, Imre; Steffens, Michael; Ekici, Arif B; Winterpacht, Andreas; Ickstadt, Katja; Schwender, Holger; Hengstler, Jan G; Golka, Klaus

    2011-10-01

    Genotyping N-acetyltransferase 2 (NAT2) is of high relevance for individualized dosing of antituberculosis drugs and bladder cancer epidemiology. In this study we compared a recently published tagging single nucleotide polymorphism (SNP) (rs1495741) to the conventional 7-SNP genotype (G191A, C282T, T341C, C481T, G590A, A803G and G857A haplotype pairs) and systematically analysed if novel SNP combinations outperform the latter. For this purpose, we studied 3177 individuals by PCR and phenotyped 344 individuals by the caffeine test. Although the tagSNP and the 7-SNP genotype showed a high degree of correlation (R=0.933, P<0.0001) the 7-SNP genotype nevertheless outperformed the tagging SNP with respect to specificity (1.0 vs. 0.9444, P=0.0065). Considering all possible SNP combinations in a receiver operating characteristic analysis we identified a 2-SNP genotype (C282T, T341C) that outperformed the tagging SNP and was equivalent to the 7-SNP genotype. The 2-SNP genotype predicted the correct phenotype with a sensitivity of 0.8643 and a specificity of 1.0. In addition, it predicted the 7-SNP genotype with sensitivity and specificity of 0.9993 and 0.9880, respectively. The prediction of the NAT2 genotype by the 2-SNP genotype performed similar in populations of Caucasian, Venezuelan and Pakistani background. A 2-SNP genotype predicts NAT2 phenotypes with similar sensitivity and specificity as the conventional 7-SNP genotype. This procedure represents a facilitation in individualized dosing of NAT2 substrates without losing sensitivity or specificity. PMID:21750470

  4. DELTA: A Distal Enhancer Locating Tool Based on AdaBoost Algorithm and Shape Features of Chromatin Modifications

    PubMed Central

    Lu, Yiming; Qu, Wubin; Shan, Guangyu; Zhang, Chenggang

    2015-01-01

    Accurate identification of DNA regulatory elements becomes an urgent need in the post-genomic era. Recent genome-wide chromatin states mapping efforts revealed that DNA elements are associated with characteristic chromatin modification signatures, based on which several approaches have been developed to predict transcriptional enhancers. However, their practical application is limited by incomplete extraction of chromatin features and model inconsistency for predicting enhancers across different cell types. To address these issues, we define a set of non-redundant shape features of histone modifications, which shows high consistency across cell types and can greatly reduce the dimensionality of feature vectors. Integrating shape features with a machine-learning algorithm AdaBoost, we developed an enhancer predicting method, DELTA (Distal Enhancer Locating Tool based on AdaBoost). We show that DELTA significantly outperforms current enhancer prediction methods in prediction accuracy on different datasets and can predict enhancers in one cell type using models trained in other cell types without loss of accuracy. Overall, our study presents a novel framework for accurately identifying enhancers from epigenetic data across multiple cell types. PMID:26091399

  5. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. An improvement in image quality over the original algorithm is obtained by ignoring the contributions from dissimilar windows. Even though their weights are very small at first sight, the newly estimated pixel value can be severely biased by the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighbourhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual quality in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied to other image processing tasks that employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
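
    A minimal Python sketch of the preclassification idea is given below: candidate windows whose patch mean deviates too much from the reference patch are assigned zero weight rather than a small one. Only the first moment is used here and the threshold is a placeholder, whereas the paper derives thresholds from the first three moments for Gaussian noise and adds further speed-ups.

import numpy as np

def nlm_preclassified(img, patch=3, search=7, h=10.0, mean_thr=10.0):
    # Brute-force non-local means with first-moment preclassification:
    # dissimilar windows get weight 0 instead of a tiny positive weight.
    r, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), r + s, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + r + s, j + r + s
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = pad[ci + di - r:ci + di + r + 1,
                               cj + dj - r:cj + dj + r + 1]
                    if abs(cand.mean() - ref.mean()) > mean_thr:
                        continue                     # dissimilar window: weight 0
                    w = np.exp(-np.sum((cand - ref) ** 2) / (h * h))
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den if den > 0 else float(img[i, j])
    return out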

  6. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  7. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  8. A novel waveband routing algorithm in hierarchical WDM optical networks

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Guo, Xiaojin; Qiu, Shaofeng; Luo, Jiangtao; Zhang, Zhizhong

    2007-11-01

    Hybrid waveband/wavelength switching in intelligent optical networks is gaining more and more academic attention. It is very challenging to develop algorithms that use the waveband switching capability efficiently. In this paper, we propose a novel cross-layer routing algorithm, the waveband layered graph routing (WBLGR) algorithm, for waveband-switching-enabled optical networks. Extensive simulations show that the WBLGR algorithm can significantly improve performance in terms of reduced call blocking probability.

  9. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood parasitism of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied comparatively, and the sensitivity and adjustment of the CS parameters within the tracking system are examined experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely the particle filter, mean-shift, PSO, ensemble, fragments, and compressive trackers, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
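
    For readers unfamiliar with CS, the following Python sketch shows a generic cuckoo search loop (Lévy-flight moves toward the current best, random replacement of a fraction of nests) maximizing an arbitrary fitness function; in a CS-based tracker the fitness would be the similarity between a candidate window and the target model. Population size, abandonment rate and iteration count are assumed defaults, not the tuned values studied in the paper.

import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5, rng=np.random):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, lower, upper, n_nests=15, pa=0.25, n_iter=100, rng=None):
    # Generic cuckoo search maximizing `fitness` under box constraints.
    rng = rng or np.random.default_rng(0)
    dim = len(lower)
    nests = rng.uniform(lower, upper, size=(n_nests, dim))
    fit = np.array([fitness(n) for n in nests])
    best = nests[fit.argmax()].copy()
    for _ in range(n_iter):
        for i in range(n_nests):
            step = 0.01 * levy(dim, rng=rng) * (nests[i] - best)
            cand = np.clip(nests[i] + step, lower, upper)
            f = fitness(cand)
            j = rng.integers(n_nests)
            if f > fit[j]:                      # replace a random nest if better
                nests[j], fit[j] = cand, f
        worst = fit.argsort()[:int(pa * n_nests)]    # abandon a fraction of nests
        nests[worst] = rng.uniform(lower, upper, size=(len(worst), dim))
        fit[worst] = [fitness(n) for n in nests[worst]]
        best = nests[fit.argmax()].copy()
    return best

# toy usage: locate the maximum of a smooth 2-D score surface
best = cuckoo_search(lambda p: -np.sum((p - 3.0) ** 2),
                     lower=np.array([0.0, 0.0]), upper=np.array([10.0, 10.0]))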

  10. Efficiency of tabu-search-based conformational search algorithms.

    PubMed

    Grebner, Christoph; Becker, Johannes; Stepanenko, Svetlana; Engels, Bernd

    2011-07-30

    Efficient conformational search or sampling approaches play an integral role in molecular modeling, leading to a strong demand for even faster and more reliable conformer search algorithms. This article compares the efficiency of a molecular dynamics method, a simulated annealing method, and the basin hopping (BH) approach (which are widely used in this field) with a previously suggested tabu-search-based approach called gradient only tabu search (GOTS). The study emphasizes the success of the GOTS procedure and, more importantly, shows that an approach which combines BH and GOTS outperforms the single methods in efficiency and speed. We also show that ring structures built by a hydrogen bond are useful as starting points for conformational search investigations of peptides and organic ligands with biological activities, especially in structures that contain multiple rings. PMID:21541959

  11. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  12. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, taking the characteristics of the different GPU memories into account, an improved scheme of our method is developed that exploits shared memory instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential OpenCV-based method in terms of computing speed.

  13. A new machine learning algorithm for removal of salt and pepper noise

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Adhami, Reza; Fu, Jian

    2015-07-01

    Supervised machine learning algorithms have been extensively studied and applied to different fields of image processing over the past decades. This paper proposes a new machine learning algorithm, called margin setting (MS), for restoring images that are corrupted by salt and pepper impulse noise. Margin setting generates a decision surface to classify noise pixels and non-noise pixels. After the noise pixels are detected, a modified ranked order mean (ROM) filter is used to replace the corrupted pixels for image reconstruction. The margin setting algorithm is tested with grayscale and color images at different noise densities. The experimental results are compared with those of the support vector machine (SVM) and the standard median filter (SMF). The results show that margin setting outperforms these methods, with higher Peak Signal-to-Noise Ratio (PSNR), lower mean square error (MSE), higher image enhancement factor (IEF) and higher Structural Similarity Index (SSIM).
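
    The restoration stage described above can be sketched as follows in Python: pixels flagged as noise are replaced by a ranked-order statistic of the non-noise pixels in their neighborhood. The margin-setting classifier itself is not reproduced here; a naive extreme-value detector stands in for it purely for illustration.

import numpy as np

def rom_restore(img, noise_mask, win=3):
    # Replace flagged pixels with the median of the non-noise pixels in the
    # local window (a simplified ranked-order-mean style filter).
    r = win // 2
    pad_img = np.pad(img.astype(float), r, mode='reflect')
    pad_mask = np.pad(noise_mask, r, mode='reflect')
    out = img.astype(float).copy()
    for i, j in zip(*np.nonzero(noise_mask)):
        block = pad_img[i:i + win, j:j + win]
        good = block[~pad_mask[i:i + win, j:j + win]]
        out[i, j] = np.median(good) if good.size else np.median(block)
    return out

def naive_sp_detector(img):
    # Stand-in detector for illustration only: flag extreme-valued pixels.
    return (img == 0) | (img == 255)

# toy usage: corrupt an image with salt-and-pepper noise and restore it
img = (np.random.rand(64, 64) * 255).astype(np.uint8)
noisy = img.copy()
flips = np.random.rand(64, 64) < 0.1
noisy[flips] = np.where(np.random.rand(64, 64)[flips] < 0.5, 0, 255)
restored = rom_restore(noisy, naive_sp_detector(noisy))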

  14. A consensus algorithm for approximate string matching and its application to QRS complex detection

    NASA Astrophysics Data System (ADS)

    Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.

    2016-08-01

    In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.

  15. Rain detection and removal algorithm using motion-compensated non-local mean filter

    NASA Astrophysics Data System (ADS)

    Song, B. C.; Seo, S. J.

    2015-03-01

    This paper proposes a novel rain detection and removal algorithm that is robust against camera motion. Detecting and removing rain in video captured by a moving camera is very difficult, so most previous works assume a fixed camera; this assumption, however, limits their practical applicability. The proposed algorithm initially detects possible rain streaks by using spatial properties such as the luminance and structure of rain streaks. Then, the rain streak candidates are selected based on a Gaussian distribution model. Next, a non-rain block matching algorithm is performed between adjacent frames to find blocks similar to each block containing rain pixels. If such similar blocks are found, the rain region of the block is reconstructed by non-local mean (NLM) filtering using these similar neighbors. Experimental results show that the proposed method outperforms previous works in terms of objective and subjective visual quality.

  16. Insomnia brings soldiers into mental health treatment, predicts treatment engagement, and outperforms other suicide-related symptoms as a predictor of major depressive episodes.

    PubMed

    Hom, Melanie A; Lim, Ingrid C; Stanley, Ian H; Chiurliza, Bruno; Podlogar, Matthew C; Michaels, Matthew S; Buchman-Schmitt, Jennifer M; Silva, Caroline; Ribeiro, Jessica D; Joiner, Thomas E

    2016-08-01

    Given the high rates of suicide among military personnel and the need to characterize suicide risk factors associated with mental health service use, this study aimed to identify suicide-relevant factors that predict: (1) treatment engagement and treatment adherence, and (2) suicide attempts, suicidal ideation, and major depressive episodes in a military sample. Army recruiters (N = 2596) completed a battery of self-report measures upon study enrollment. Eighteen months later, information regarding suicide attempts, suicidal ideation, major depressive episodes, and mental health visits were obtained from participants' military medical records. Suicide attempts and suicidal ideation were very rare in this sample; negative binomial regression analyses with robust estimation were used to assess correlates and predictors of mental health treatment visits and major depressive episodes. More severe insomnia and agitation were significantly associated with mental health visits at baseline and over the 18-month study period. In contrast, suicide-specific hopelessness was significantly associated with fewer mental health visits. Insomnia severity was the only significant predictor of major depressive episodes. Findings suggest that assessment of sleep problems might be useful in identifying at-risk military service members who may engage in mental health treatment. Additional research is warranted to examine the predictive validity of these suicide-related symptom measures in a more representative, higher suicide risk military sample. PMID:27218816

  17. Social significance of community structure: Statistical view

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Daniels, Jasmine J.

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure in statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.

  18. Social significance of community structure: statistical view.

    PubMed

    Li, Hui-Jia; Daniels, Jasmine J

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure in statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc. PMID:25679651

  19. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758

  20. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^((ω/2)k)) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^((ω/2)k)). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  1. FRESH-FRI-Based Single-Image Super-Resolution Algorithm.

    PubMed

    Wei, Xiaoyao; Dragotti, Pier Luigi

    2016-08-01

    In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need of learning patches pairs from external data sets. We achieve this by modeling images and, more precisely, lines of images as piecewise smooth functions and propose a resolution enhancement method for this type of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging from the multi-resolution analysis in wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels. PMID:27168595

  2. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  3. Heat-treated (in single aliquot or batch) colostrum outperforms non-heat-treated colostrum in terms of quality and transfer of immunoglobulin G in neonatal Jersey calves.

    PubMed

    Kryzer, A A; Godden, S M; Schell, R

    2015-03-01

    The objective of this randomized clinical trial was to describe the effect on colostrum characteristics and passive transfer of IgG in neonatal calves when using the Perfect Udder colostrum management system (single-aliquot treatment; Dairy Tech Inc., Greeley, CO) compared with a negative control (fresh refrigerated or fresh frozen colostrum) and a positive control (batch heat-treated colostrum). First-milking Jersey colostrum was pooled to achieve 31 unique batches with a minimum of 22.8 L per batch. The batch was then divided into 4 with 3.8 L allocated to each treatment group: (1) heat-treated in Perfect Udder bag at 60°C for 60 min and then stored at -20°C (PU); (2) heat-treated in a batch pasteurizer (Dairy Tech Inc.) at 60°C for 60 min and then stored at -20°C in Perfect Udder bag (DTB; positive control); (3) fresh frozen colostrum stored at -20°C in Perfect Udder bag (FF; negative control); and (4) fresh refrigerated colostrum stored at 4°C in Perfect Udder bag (FR; negative control). Colostrum from all treatments was sampled for analysis of IgG concentration and bacterial culture immediately after batch assembly, after processing, and before feeding. Newborn Jersey calves were randomly assigned to be fed 3.8 L of colostrum from 1 of the 4 treatment groups. A prefeeding, 0-h blood sample was collected, calves were fed by esophageal tube within 2 h of birth, and then a 24-h postfeeding blood sample was collected. Paired serum samples from 0- and 24-h blood samples were analyzed for IgG concentration (mg/mL) using radial immunodiffusion analysis. The overall mean IgG concentration in colostrum was 77.9 g/L and was not affected by treatment. Prefeeding total plate counts (log10 cfu/mL) were significantly different for all 4 treatments and were lower for heat-treated colostrum (PU=4.23, DTB=3.63) compared with fresh colostrum (FF=5.68, FR=6.53). Total coliform counts (log10 cfu/mL) were also significantly different for all 4 treatments and were lower for

  4. Advanced Imaging Algorithms for Radiation Imaging Systems

    SciTech Connect

    Marleau, Peter

    2015-10-01

    The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithms require significant increases in calculation speed in order to converge to solutions in practical amounts of time.

  5. Fast decoding algorithms for coded aperture systems

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

    2014-08-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques.

  6. Algorithms for optimal dyadic decision trees

    SciTech Connect

    Hush, Don; Porter, Reid

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low-dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  7. Enhancing the synchronizability of networks by rewiring based on tabu search and a local greedy algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Cui-Li; Tang, Kit-Sang

    2011-12-01

    By considering the eigenratio of the Laplacian matrix as the synchronizability measure, this paper presents an efficient method to enhance the synchronizability of undirected and unweighted networks via rewiring. The rewiring method combines the use of tabu search and a local greedy algorithm so that an effective search of solutions can be achieved. As demonstrated in the simulation results, the performance of the proposed approach outperforms the existing methods for a large variety of initial networks, both in terms of speed and quality of solutions.

  8. A Heuristic Approach Based on Clarke-Wright Algorithm for Open Vehicle Routing Problem

    PubMed Central

    2013-01-01

    We propose a heuristic approach based on the Clarke-Wright algorithm (CW) to solve the open version of the well-known capacitated vehicle routing problem in which vehicles are not required to return to the depot after completing service. The proposed CW has been presented in four procedures composed of Clarke-Wright formula modification, open-route construction, two-phase selection, and route postimprovement. Computational results show that the proposed CW is competitive and outperforms classical CW in all directions. Moreover, the best known solution is also obtained in 97% of tested instances (60 out of 62). PMID:24382948
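
    A compact Python sketch of a Clarke-Wright-style construction for the open VRP is shown below. It uses one common open-route adaptation of the savings formula, s(i, j) = d(0, j) - d(i, j), and the standard end-to-start route merge; the paper's specific formula modification, two-phase selection and route post-improvement steps are not reproduced.

import numpy as np

def open_vrp_savings(dist, demand, capacity):
    # Open VRP: routes start at the depot (node 0) and do not return.
    n = len(dist)
    routes = {i: [i] for i in range(1, n)}           # one route per customer
    load = {i: demand[i] for i in range(1, n)}
    savings = sorted(((dist[0][j] - dist[i][j], i, j)
                      for i in range(1, n) for j in range(1, n) if i != j),
                     reverse=True)
    for s, i, j in savings:
        if s <= 0:
            break
        ri = next((r for r, seq in routes.items() if seq[-1] == i), None)
        rj = next((r for r, seq in routes.items() if seq[0] == j), None)
        if ri is None or rj is None or ri == rj:
            continue                                 # i must end a route, j must start one
        if load[ri] + load[rj] > capacity:
            continue
        routes[ri] += routes[rj]                     # append route rj after customer i
        load[ri] += load[rj]
        del routes[rj], load[rj]
    return list(routes.values())

# toy usage on random coordinates (node 0 is the depot)
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(8, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
print(open_vrp_savings(dist, demand=[0] + [1] * 7, capacity=3))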

  9. Self-organization and clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1991-01-01

    Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.

  10. An Iterative Soft-Decision Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao

    1996-01-01

    This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically (or near) optimal error performance with significant reduction in decoding computational complexity. The average number of search iterations is also small even for low signal-to-noise ratio.

  11. Image watermarking using a dynamically weighted fuzzy c-means algorithm

    NASA Astrophysics Data System (ADS)

    Kang, Myeongsu; Ho, Linh Tran; Kim, Yongmin; Kim, Cheol Hong; Kim, Jong-Myon

    2011-10-01

    Digital watermarking has received extensive attention as a new method of protecting multimedia content from unauthorized copying. In this paper, we present a nonblind watermarking system using a proposed dynamically weighted fuzzy c-means (DWFCM) technique combined with discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) techniques for copyright protection. The proposed scheme efficiently selects blocks in which the watermark is embedded using new membership values of DWFCM as the embedding strength. We evaluated the proposed algorithm in terms of robustness against various watermarking attacks and imperceptibility compared to other algorithms [DWT-DCT-based and DCT- fuzzy c-means (FCM)-based algorithms]. Experimental results indicate that the proposed algorithm outperforms other algorithms in terms of robustness against several types of attacks, such as noise addition (Gaussian noise, salt and pepper noise), rotation, Gaussian low-pass filtering, mean filtering, median filtering, Gaussian blur, image sharpening, histogram equalization, and JPEG compression. In addition, the proposed algorithm achieves higher values of peak signal-to-noise ratio (approximately 49 dB) and lower values of measure-singular value decomposition (5.8 to 6.6) than other algorithms.

  12. Wavelet neural networks initialization using hybridized clustering and harmony search algorithm: Application in epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Zainuddin, Zarita; Lai, Kee Huong; Ong, Pauline

    2013-04-01

    Artificial neural networks (ANNs) are powerful mathematical models that are used to solve complex real world problems. Wavelet neural networks (WNNs), which were developed based on the wavelet theory, are a variant of ANNs. During the training phase of WNNs, several parameters need to be initialized; including the type of wavelet activation functions, translation vectors, and dilation parameter. The conventional k-means and fuzzy c-means clustering algorithms have been used to select the translation vectors. However, the solution vectors might get trapped at local minima. In this regard, the evolutionary harmony search algorithm, which is capable of searching for near-optimum solution vectors, both locally and globally, is introduced to circumvent this problem. In this paper, the conventional k-means and fuzzy c-means clustering algorithms were hybridized with the metaheuristic harmony search algorithm. In addition to obtaining the estimation of the global minima accurately, these hybridized algorithms also offer more than one solution to a particular problem, since many possible solution vectors can be generated and stored in the harmony memory. To validate the robustness of the proposed WNNs, the real world problem of epileptic seizure detection was presented. The overall classification accuracy from the simulation showed that the hybridized metaheuristic algorithms outperformed the standard k-means and fuzzy c-means clustering algorithms.
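
    To make the harmony search step concrete, the Python sketch below applies plain harmony search to the selection of a set of translation vectors (cluster centres), using the mean distance of samples to their nearest centre as the cost. It illustrates the metaheuristic only; the paper hybridizes harmony search with k-means and fuzzy c-means rather than running it alone, and all parameter values here are assumptions.

import numpy as np

def quantization_error(centers, data):
    # Mean distance from each sample to its nearest centre (clustering cost).
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).mean()

def harmony_search_centers(data, k, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                           n_iter=500, rng=None):
    # Plain harmony search over flattened sets of k centres.
    rng = rng or np.random.default_rng(0)
    lo = np.tile(data.min(axis=0), k)
    hi = np.tile(data.max(axis=0), k)
    dim = lo.size
    memory = rng.uniform(lo, hi, size=(hms, dim))
    cost = np.array([quantization_error(m.reshape(k, -1), data) for m in memory])
    for _ in range(n_iter):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                  # draw from harmony memory
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:               # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:                                    # random consideration
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        c = quantization_error(new.reshape(k, -1), data)
        worst = cost.argmax()
        if c < cost[worst]:                          # replace the worst harmony
            memory[worst], cost[worst] = new, c
    return memory[cost.argmin()].reshape(k, -1)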

  13. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qbits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qbit may repeatedly be cooled without adding additional qbits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  14. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    The contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - a recursive layout computing system; and Parallel linear conflict-free subtree access.

  15. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  16. Effects of deformable registration algorithms on the creation of statistical maps for preoperative targeting in deep brain stimulation procedures

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M.

    2014-03-01

    Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate pre-operative localization of the targets on pre-operative MRI sequences is challenging, as these targets are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best result for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we have qualitatively and quantitatively compared the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and the results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads, with marginal differences in their shape. Among the six methods studied, Diffeomorphic Demons produces the largest spreads and centroids that are in general the furthest apart from the others. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.

  17. Performance Comparison of Attribute Set Reduction Algorithms in Stock Price Prediction - A Case Study on Indian Stock Data

    NASA Astrophysics Data System (ADS)

    Sivakumar, P. Bagavathi; Mohandas, V. P.

    Stock price prediction and stock trend prediction are the two major research problems of financial time series analysis. In this work, a performance comparison of various attribute set reduction algorithms was made for short-term stock price prediction. Forward selection, backward elimination, optimized selection, optimized selection based on brute force, weight-guided selection, and optimized selection based on evolutionary principles and strategies were used. Different selection schemes and crossover types were explored. To supplement learning and modeling, a support vector machine was also used in combination. The algorithms were applied to real Indian stock data, namely the CNX Nifty. The experimental study was conducted using the open source data mining tool RapidMiner. Performance was compared in terms of root mean squared error, squared error and execution time. The obtained results indicate the superiority of evolutionary algorithms; the optimized selection algorithm based on evolutionary principles outperforms the others.
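
    Of the attribute set reduction schemes compared above, forward selection is the simplest to state. The Python sketch below shows a greedy forward selection loop scored by cross-validated RMSE of a support vector regressor; the estimator, scoring and stopping rule are assumptions for illustration and do not reproduce the RapidMiner configuration used in the study.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def forward_selection(X, y, max_features=None, cv=5):
    # Greedily add the attribute that most reduces cross-validated RMSE;
    # stop when no remaining attribute improves the error.
    n_features = X.shape[1]
    max_features = max_features or n_features
    selected, best_rmse = [], np.inf
    while len(selected) < max_features:
        candidate, candidate_rmse = None, best_rmse
        for f in range(n_features):
            if f in selected:
                continue
            cols = selected + [f]
            scores = cross_val_score(SVR(), X[:, cols], y, cv=cv,
                                     scoring='neg_root_mean_squared_error')
            rmse = -scores.mean()
            if rmse < candidate_rmse:
                candidate, candidate_rmse = f, rmse
        if candidate is None:
            break
        selected.append(candidate)
        best_rmse = candidate_rmse
    return selected, best_rmse

# toy usage on synthetic regression data
X = np.random.rand(120, 8)
y = 2 * X[:, 0] - X[:, 3] + 0.05 * np.random.randn(120)
print(forward_selection(X, y, max_features=4))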

  18. Searching for repeats, as an example of using the generalised Ruzzo-Tompa algorithm to find optimal subsequences with gaps.

    PubMed

    Spouge, John L; Mariño-Ramírez, Leonardo; Sheetlin, Sergey L

    2014-01-01

    Some biological sequences contain subsequences of unusual composition; e.g. some proteins contain DNA binding domains, transmembrane regions and charged regions, and some DNA sequences contain repeats. The linear-time Ruzzo-Tompa (RT) algorithm finds subsequences of unusual composition, using a sequence of scores as input and the corresponding 'maximal segments' as output. In principle, permitting gaps in the output subsequences could improve sensitivity. Here, the input of the RT algorithm is generalised to a finite, totally ordered, weighted graph, so the algorithm locates paths of maximal weight through increasing but not necessarily adjacent vertices. By permitting the penalised deletion of unfavourable letters, the generalisation therefore includes gaps. The program RepWords, which finds inexact simple repeats in DNA, exemplifies the general concepts by out-performing a similar extant, ad hoc tool. With minimal programming effort, the generalised Ruzzo-Tompa algorithm could improve the performance of many programs for finding biological subsequences of unusual composition. PMID:24989859

  19. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    PubMed Central

    Ramyachitra, D.; Sofia, M.; Manikandan, P.

    2015-01-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples, so the difficulty lies in the high dimensionality of the data and the small sample size. This research work addresses the problem by classifying the resulting dataset using existing algorithms such as the Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC) algorithms and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  20. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    PubMed

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples, so the difficulty lies in the high dimensionality of the data and the small sample size. This research work addresses the problem by classifying the resulting dataset using existing algorithms such as the Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC) algorithms and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  1. A junction-tree based learning algorithm to optimize network wide traffic control: A coordinated multi-agent framework

    SciTech Connect

    Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; Ukkusuri, Satish V.

    2015-01-31

    Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.

  2. A junction-tree based learning algorithm to optimize network wide traffic control: A coordinated multi-agent framework

    DOE PAGESBeta

    Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; Ukkusuri, Satish V.

    2015-01-31

    Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.

  3. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  4. Classification of urban vegetation patterns from hyperspectral imagery: hybrid algorithm based on genetic algorithm tuned fuzzy support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Mandi; Shu, Jiong; Chen, Zhigang; Ji, Minhe

    2012-11-01

    Hyperspectral imagery has been widely used in terrain classification for its high spectral resolution. Urban vegetation, an essential part of the urban ecosystem, can be difficult to discern due to the high similarity of spectral signatures among some land-cover classes. In this paper, we investigate a hybrid approach, the genetic-algorithm-tuned fuzzy support vector machine (GA-FSVM), and apply it to urban vegetation classification from aerial hyperspectral urban imagery. The approach adopts the genetic algorithm to optimize the parameters of the support vector machine, and employs the K-nearest neighbor algorithm to calculate the membership function for each fuzzy parameter, aiming to reduce the effects of isolated and noisy samples. Test data come from a push-broom hyperspectral imager (PHI) remote sensing image that partially covers a corner of the Shanghai World Exposition Park; PHI is a hyperspectral sensor developed by the Shanghai Institute of Technical Physics. Experimental results show the GA-FSVM model generates an overall accuracy of 71.2%, outperforming the maximum likelihood classifier (49.4% accuracy) and the artificial neural network method (60.8% accuracy). This indicates GA-FSVM is a promising model for vegetation classification from hyperspectral urban data, with a clear advantage in classification problems involving abundant mixed pixels and small samples.
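
    The following sketch illustrates the general idea of tuning SVM hyperparameters with a genetic algorithm, using scikit-learn on synthetic data. It is an assumption-laden simplification: the fuzzy membership weighting, the K-nearest-neighbor membership computation, and the hyperspectral features of GA-FSVM are not reproduced, and the population size, mutation scale, and parameter bounds are arbitrary.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Toy stand-in for the hyperspectral training pixels.
        X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                                   n_classes=3, n_clusters_per_class=1, random_state=0)
        rng = np.random.default_rng(0)

        def fitness(ind):
            # Cross-validated accuracy of an SVM with the encoded (C, gamma).
            C, gamma = 10.0 ** ind[0], 10.0 ** ind[1]
            return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

        # Each individual encodes (log10 C, log10 gamma).
        pop = rng.uniform([-1, -4], [3, 0], size=(12, 2))
        for generation in range(10):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:6]]          # truncation selection
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(6)], parents[rng.integers(6)]
                child = np.where(rng.random(2) < 0.5, a, b)       # uniform crossover
                child = child + rng.normal(0.0, 0.2, size=2)      # Gaussian mutation
                children.append(child)
            pop = np.vstack([parents, children])

        best = max(pop, key=fitness)
        print("best C = %.3g, gamma = %.3g" % (10.0 ** best[0], 10.0 ** best[1]))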

  5. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm, adding elements from the differential evolution and the artificial bee colony algorithms. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  6. Improved bat algorithm applied to multilevel image thresholding.

    PubMed

    Alihodzic, Adis; Tuba, Milan

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm, adding elements from the differential evolution and the artificial bee colony algorithms. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
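
    For orientation, the sketch below shows the core loop of the standard bat algorithm on a toy objective (the sphere function). The differential-evolution and artificial-bee-colony-inspired modifications, as well as the multilevel-thresholding objective itself, are not reproduced; all parameter values are illustrative.

        import numpy as np

        # Core loop of the standard bat algorithm on a toy objective; parameter
        # values are illustrative and loudness/pulse rate are kept constant.
        rng = np.random.default_rng(1)

        def objective(x):
            return np.sum(x ** 2)          # sphere function as a placeholder

        n, dim, f_min, f_max = 20, 5, 0.0, 2.0
        loudness, pulse_rate = 0.9, 0.4
        x = rng.uniform(-5.0, 5.0, (n, dim))   # bat positions
        v = np.zeros((n, dim))                 # bat velocities
        fit = np.apply_along_axis(objective, 1, x)
        best = x[np.argmin(fit)].copy()

        for t in range(200):
            for i in range(n):
                freq = f_min + (f_max - f_min) * rng.random()
                v[i] += (x[i] - best) * freq
                cand = x[i] + v[i]
                if rng.random() > pulse_rate:                 # local walk around the best bat
                    cand = best + 0.01 * rng.normal(size=dim)
                f_cand = objective(cand)
                if f_cand < fit[i] and rng.random() < loudness:
                    x[i], fit[i] = cand, f_cand
                if f_cand < objective(best):
                    best = cand.copy()

        print("best objective value:", objective(best))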

  7. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
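
    The abstract does not spell out the algorithm, so the sketch below is only one plausible scheme consistent with the constraint, not necessarily the authors' method: iterating y ← sqrt(sqrt(x·y)) has the fixed point y^4 = x·y, i.e. y = x^(1/3), and therefore needs nothing beyond multiplication and the square-root key.

        import math

        def cube_root(x, iterations=30):
            # Approximate x**(1/3) for x > 0 using only square roots:
            # the fixed point of y <- sqrt(sqrt(x * y)) satisfies y**4 = x*y, i.e. y = x**(1/3).
            y = 1.0
            for _ in range(iterations):
                y = math.sqrt(math.sqrt(x * y))
            return y

        print(cube_root(27.0))   # converges to 3.0 within a few dozen iterations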

  8. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism for the bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  9. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  10. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  11. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  12. Interpreting the flock algorithm from a statistical perspective.

    PubMed

    Anderson, Eric C; Barry, Patrick D

    2015-09-01

    We show that the algorithm in the program flock (Duchesne & Turgeon 2009) can be interpreted as an estimation procedure based on a model essentially identical to the structure (Pritchard et al. 2000) model with no admixture and without correlated allele frequency priors. Rather than using MCMC, the flock algorithm searches for the maximum a posteriori estimate of this structure model via a simulated annealing algorithm with a rapid cooling schedule (namely, the exponent on the objective function →∞). We demonstrate the similarities between the two programs in a two-step approach. First, to enable rapid batch processing of many simulated data sets, we modified the source code of structure to use the flock algorithm, producing the program flockture. With simulated data, we confirmed that results obtained with flock and flockture are very similar (though flockture is some 200 times faster). Second, we simulated multiple large data sets under varying levels of population differentiation for both microsatellite and SNP genotypes. We analysed them with flockture and structure and assessed each program on its ability to cluster individuals to their correct subpopulation. We show that flockture yields results similar to structure albeit with greater variability from run to run. flockture did perform better than structure when genotypes were composed of SNPs and differentiation was moderate (FST= 0.022-0.032). When differentiation was low, structure outperformed flockture for both marker types. On large data sets like those we simulated, it appears that flock's reliance on inference rules regarding its 'plateau record' is not helpful. Interpreting flock's algorithm as a special case of the model in structure should aid in understanding the program's output and behaviour. PMID:25913195

  13. Echinometra sea urchins acclimatized to elevated pCO2 at volcanic vents outperform those under present-day pCO2 conditions.

    PubMed

    Uthicke, Sven; Ebert, Thomas; Liddy, Michelle; Johansson, Charlotte; Fabricius, Katharina E; Lamare, Miles

    2016-07-01

    Rising atmospheric CO2 concentrations will significantly reduce ocean pH during the 21st century (ocean acidification, OA). This may hamper calcification in marine organisms such as corals and echinoderms, as shown in many laboratory-based experiments. Sea urchins are considered highly vulnerable to OA. We studied an Echinometra species on natural volcanic CO2 vents in Papua New Guinea, where they are CO2 -acclimatized and also subjected to secondary ecological changes from elevated CO2 . Near the vent site, the urchins experienced large daily variations in pH (>1 unit) and pCO2 (>2000 ppm) and average pH values (pHT 7.73) much below those expected under the most pessimistic future emission scenarios. Growth was measured over a 17-month period using tetracycline tagging of the calcareous feeding lanterns. Average-sized urchins grew more than twice as fast at the vent compared with those at an adjacent control site and assumed larger sizes at the vent compared to the control site and two other sites at another reef near-by. A small reduction in gonad weight was detected at the vents, but no differences in mortality, respiration, or degree of test calcification were detected between urchins from vent and control populations. Thus, urchins did not only persist but actually 'thrived' under extreme CO2 conditions. We suggest an ecological basis for this response: Increased algal productivity under increased pCO2 provided more food at the vent, resulting in higher growth rates. The wider implication of our observation is that laboratory studies on non-acclimatized specimens, which typically do not consider ecological changes, can lead to erroneous conclusions on responses to global change. PMID:26762613

  14. 454 next generation-sequencing outperforms allele-specific PCR, Sanger sequencing, and pyrosequencing for routine KRAS mutation analysis of formalin-fixed, paraffin-embedded samples

    PubMed Central

    Altimari, Annalisa; de Biase, Dario; De Maglio, Giovanna; Gruppioni, Elisa; Capizzi, Elisa; Degiovanni, Alessio; D’Errico, Antonia; Pession, Annalisa; Pizzolitto, Stefano; Fiorentino, Michelangelo; Tallini, Giovanni

    2013-01-01

    Detection of KRAS mutations in archival pathology samples is critical for therapeutic appropriateness of anti-EGFR monoclonal antibodies in colorectal cancer. We compared the sensitivity, specificity, and accuracy of Sanger sequencing, ARMS-Scorpion (TheraScreen®) real-time polymerase chain reaction (PCR), pyrosequencing, chip array hybridization, and 454 next-generation sequencing to assess KRAS codon 12 and 13 mutations in 60 nonconsecutive selected cases of colorectal cancer. Twenty of the 60 cases were detected as wild-type KRAS by all methods with 100% specificity. Among the 40 mutated cases, 13 were discrepant with at least one method. The sensitivity was 85%, 90%, 93%, and 92%, and the accuracy was 90%, 93%, 95%, and 95% for Sanger sequencing, TheraScreen real-time PCR, pyrosequencing, and chip array hybridization, respectively. The main limitation of Sanger sequencing was its low analytical sensitivity, whereas TheraScreen real-time PCR, pyrosequencing, and chip array hybridization showed higher sensitivity but suffered from the limitations of predesigned assays. Concordance between the methods was k = 0.79 for Sanger sequencing and k > 0.85 for the other techniques. Tumor cell enrichment correlated significantly with the abundance of KRAS-mutated deoxyribonucleic acid (DNA), evaluated as ΔCt for TheraScreen real-time PCR (P = 0.03), percentage of mutation for pyrosequencing (P = 0.001), ratio for chip array hybridization (P = 0.003), and percentage of mutation for 454 next-generation sequencing (P = 0.004). Also, 454 next-generation sequencing showed the best cross correlation for quantification of mutation abundance compared with all the other methods (P < 0.001). Our comparison showed the superiority of next-generation sequencing over the other techniques in terms of sensitivity and specificity. Next-generation sequencing will replace Sanger sequencing as the reference technique for diagnostic detection of KRAS mutation in archival tumor tissues. PMID

  15. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  16. An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Lin, Yu; Moret, Bernard M E

    2015-05-01

    Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR. PMID:25517208

  17. SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET

    SciTech Connect

    Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu; Pradier, Olivier; Cheze Le Rest, Catherine

    2015-10-15

    Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor—Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM) and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
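
    As background for readers unfamiliar with the underlying clustering step, the sketch below implements the standard fuzzy C-means updates with a Euclidean norm on toy data. The generalized norm, its automatic image-by-image estimation, and everything PET-specific in SPEQTACLE are not reproduced here.

        import numpy as np

        def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, eps=1e-9):
            # Standard FCM with Euclidean distances; X has shape (n_samples, n_features).
            rng = np.random.default_rng(0)
            u = rng.random((len(X), c))
            u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
            for _ in range(n_iter):
                um = u ** m
                centers = um.T @ X / um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)
            return centers, u

        # Toy example: two 1-D blobs standing in for voxel intensities.
        X = np.concatenate([np.random.normal(0.0, 1.0, (50, 1)),
                            np.random.normal(5.0, 1.0, (50, 1))])
        centers, memberships = fuzzy_c_means(X)
        print(centers.ravel())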

  18. An automatic precipitation phase distinction algorithm for optical disdrometer data over the global ocean

    NASA Astrophysics Data System (ADS)

    Burdanowitz, J.; Klepp, C.; Bakan, S.

    2015-12-01

    The lack of high-quality in situ surface precipitation data over the global ocean so far limits the capability to validate satellite precipitation retrievals. The first systematic ship-based surface precipitation dataset, OceanRAIN (Ocean Rainfall And Ice-phase precipitation measurement Network), aims at providing a comprehensive statistical basis of in situ precipitation reference data from optical disdrometers at 1 min resolution deployed on various research vessels (RVs). Deriving the precipitation rate for rain and snow requires a priori knowledge of the precipitation phase (PP). Therefore, we present an automatic PP distinction algorithm using available data based on more than four years of atmospheric measurements onboard RV Polarstern that cover all climatic regions of the Atlantic Ocean. A time-consuming manual PP distinction within the OceanRAIN post-processing serves as reference, mainly based on 3-hourly present weather information from a human observer. For automation, we find that the combination of air temperature, relative humidity, and the 99th percentile of the particle diameter best predicts the PP with respect to the manually determined PP. Excluding mixed-phase, this variable combination reaches an accuracy of 91 % when compared to the manually determined PP for about 149 000 min of precipitation from RV Polarstern. Including mixed-phase (165 000 min), an accuracy of 81.2 % is reached, with a slight snow overprediction bias of 0.93, for two independent PP distributions. In that respect, a method using two independent PP distributions outperforms a method based on only one PP distribution. The new statistical automatic PP distinction method significantly speeds up the data post-processing within OceanRAIN while introducing an objective PP probability for each PP at 1 min resolution.

  19. Recent processing string and fusion algorithm improvements for automated sea mine classification in shallow water

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernandez, Manuel F.; Dobeck, Gerald J.

    2003-09-01

    A novel sea mine computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The overall CAD/CAC processing string consists of pre-processing, adaptive clutter filtering (ACF), normalization, detection, feature extraction, feature orthogonalization, optimal subset feature selection, classification and fusion processing blocks. The range-dimension ACF is matched both to average highlight and shadow information, while also adaptively suppressing background clutter. For each detected object, features are extracted and processed through an orthogonalization transformation, enabling an efficient application of the optimal log-likelihood-ratio-test (LLRT) classification rule, in the orthogonal feature space domain. The classified objects of 4 distinct processing strings are fused using the classification confidence values as features and logic-based, "M-out-of-N", or LLRT-based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new shallow water high-resolution sonar imagery data. The processing string detection and classification parameters were tuned and the string classification performance was optimized, by appropriately selecting a subset of the original feature set. A significant improvement was made to the CAD/CAC processing string by utilizing a repeated application of the subset feature selection / LLRT classification blocks. It was shown that LLRT-based fusion algorithms outperform the logic based and the "M-out-of-N" ones. The LLRT-based fusion of the CAD/CAC processing strings resulted in up to a nine-fold false alarm rate reduction, compared to the best single CAD/CAC processing string results, while maintaining a constant correct mine classification probability.

  20. Novel Algorithms for the Identification of Biologically Informative Chemical Diversity Metrics

    PubMed Central

    Theertham, Bhargav; Wang, Jenna. L.; Fang, Jianwen; Lushington, Gerald H.

    2009-01-01

    Despite great advances in the efficiency of analytical and synthetic chemistry, time and available starting material still limit the number of unique compounds that can be practically synthesized and evaluated as prospective therapeutics. Chemical diversity analysis (the capacity to identify finite diverse subsets that reliably represent greater manifolds of drug-like chemicals) thus remains an important resource in drug discovery. Despite an unproven track record, chemical diversity has also been used to posit, from preliminary screen hits, new compounds with similar or better activity. Identifying diversity metrics that demonstrably encode bioactivity trends is thus of substantial potential value for intelligent assembly of targeted screens. This paper reports novel algorithms designed to simultaneously reflect chemical similarity or diversity trends and apparent bioactivity in compound collections. An extensive set of descriptors are evaluated within large NCI screening data sets according to bioactivity differentiation capacities, quantified as the ability to co-localize known active species into bioactive-rich K-means clusters. One method tested for descriptor selection orders features according to relative variance across a set of training compounds, and samples increasingly finer subset meshes for descriptors whose exclusion from the model induces drastic drops in relative bioactive colocalization. This yields metrics with reasonable bioactive enrichment (greater than 50% of all bioactive compounds collected into clusters or cells with significantly enriched active/inactive rates) for each of the four data sets examined herein. A second method replaces variance by an active/inactive divergence score, achieving comparable enrichment via a much more efficient search process. Combinations of the above metrics are tested in 2D rectilinear diversity models, achieving similarly successful colocalization statistics, with metrics derived from the active

  1. A novel coupling of noise reduction algorithms for particle flow simulations

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Reese, J. M.; Emerson, D. R.

    2016-09-01

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and the approach is referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of the data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. This is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
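
    A minimal sketch of the general POD-plus-wavelet-thresholding idea is shown below, using NumPy's SVD for the POD step and the PyWavelets package for soft thresholding of the temporal coefficients. It is not the authors' WAVinPOD implementation; the mode count, wavelet, and threshold are arbitrary choices for illustration.

        import numpy as np
        import pywt   # PyWavelets

        def pod_wavelet_denoise(snapshots, n_modes=5, wavelet="db4", level_frac=0.1):
            # snapshots: (n_points, n_times). Project onto the leading POD modes,
            # soft-threshold the temporal coefficients of each mode in a wavelet
            # basis, then reconstruct the field.
            U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
            U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]
            Vt_dn = np.empty_like(Vt)
            for k in range(n_modes):
                coeffs = pywt.wavedec(Vt[k], wavelet)
                coeffs = [pywt.threshold(c, level_frac * np.max(np.abs(c)), mode="soft")
                          for c in coeffs]
                rec = pywt.waverec(coeffs, wavelet)
                Vt_dn[k] = rec[: Vt.shape[1]]          # waverec may pad by one sample
            return (U * s) @ Vt_dn

        # Toy example: a noisy travelling sine field.
        t = np.linspace(0.0, 2.0 * np.pi, 256)
        x = np.linspace(0.0, 1.0, 64)
        field = np.sin(3.0 * x[:, None] + t[None, :]) + 0.3 * np.random.randn(64, 256)
        print(pod_wavelet_denoise(field).shape)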

  2. Optimization of geometry and modeling parameters of artificial neural networks using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sahoo, G. B.

    2007-12-01

    In recent years, artificial neural networks (ANNs) have appeared to be a viable alternative to models that use phenomenological hypotheses (i.e., knowledge-based models) for cases where (1) the available data are not detailed and sufficient enough for using a process-based model and (2) the detailed, complex physics of the system is only partially understood. ANNs have been widely used in many fields, such as chemical and environmental engineering, hydrology, and water resources applications, for optimum prediction of system parameters and variables. However, in most cases, parameters and system variables were forecasted employing suboptimal ANNs. The geometry and modeling parameters of an artificial neural network (ANN) and the training dataset have significant effects on its predictive performance. The number of combinations of ANN modeling parameters and geometries within the modeling domain (i.e., the lower and upper bounds of each modeling parameter and geometry) is large enough (greater than 100,000) that it is difficult to examine all cases using a trial-and-error approach, and one could easily end up with a set of suboptimal values. This study presents the use of genetic algorithms (GAs) to search for the optimal geometry and values of the modeling parameters of a multilayer feedforward backpropagation neural network (BPNN) and a radial basis function network (RBFN). The predictive performance of the GA and ANN combination is examined using two datasets derived from the same population for training. It is illustrated that (1) the GA-optimized ANN outperforms the ANN tuned using a trial-and-error approach, and (2) ANN predictive performance and geometry depend on the number of samples and the characteristics of the samples included in the training dataset.

  3. A Cascade Random Forests Algorithm for Predicting Protein-Protein Interaction Sites.

    PubMed

    Wei, Zhi-Sen; Yang, Jing-Yu; Shen, Hong-Bin; Yu, Dong-Jun

    2015-10-01

    Protein-protein interactions exist ubiquitously and play important roles in the life cycles of living cells. The interaction sites (residues) are essential to understanding the underlying mechanisms of protein-protein interactions. Previous research has demonstrated that the accurate identification of protein-protein interaction sites (PPIs) is helpful for developing new therapeutic drugs because many drugs will interact directly with those residues. Because of its significant potential in biological research and drug development, the prediction of PPIs has become an important topic in computational biology. However, a severe data imbalance exists in the PPIs prediction problem, where the number of the majority class samples (non-interacting residues) is far larger than that of the minority class samples (interacting residues). Thus, we developed a novel cascade random forests algorithm (CRF) to address the serious data imbalance that exists in the PPIs prediction problem. The proposed CRF resolves the negative effect of data imbalance by connecting multiple random forests in a cascade-like manner, each of which is trained with a balanced training subset that includes all minority samples and a subset of majority samples using an effective ensemble protocol. Based on the proposed CRF, we implemented a new sequence-based PPIs predictor, called CRF-PPI, which takes the combined features of position-specific scoring matrices, averaged cumulative hydropathy, and predicted relative solvent accessibility as model inputs. Benchmark experiments on both the cross validation and independent validation datasets demonstrated that the proposed CRF-PPI outperformed the state-of-the-art sequence-based PPIs predictors. The source code for CRF-PPI and the benchmark datasets are available online at http://csbio.njust.edu.cn/bioinf/CRF-PPI for free academic use. PMID:26441427
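
    The sketch below illustrates the balanced-subset idea behind such approaches in scikit-learn: each forest is trained on all minority samples plus an equally sized random draw of majority samples, and the forests' probabilities are averaged. This is a plain balanced ensemble for illustration, not the cascade protocol or the sequence-derived features of CRF-PPI.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        # Imbalanced toy data (about 5% positives) standing in for interaction-site residues.
        X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
        rng = np.random.default_rng(0)

        def train_balanced_forests(X, y, n_stages=3):
            # Each forest sees all minority samples plus an equally sized random
            # draw of majority samples (a simple balancing protocol, not the paper's).
            pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
            forests = []
            for _ in range(n_stages):
                idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
                forests.append(RandomForestClassifier(n_estimators=100, random_state=0).fit(X[idx], y[idx]))
            return forests

        def predict_ensemble(forests, X):
            # Average the positive-class probabilities of all forests.
            p = np.mean([f.predict_proba(X)[:, 1] for f in forests], axis=0)
            return (p >= 0.5).astype(int)

        forests = train_balanced_forests(X, y)
        print("predicted positives:", predict_ensemble(forests, X).sum())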

  4. Versatility of the CFR algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V. )

    1990-04-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited-angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  5. Image change detection algorithms: a systematic survey.

    PubMed

    Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath

    2005-03-01

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326

  6. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine which performance differences are significant. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655

  7. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine which performance differences are significant. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655

  8. Optimal configuration algorithm of a satellite transponder

    NASA Astrophysics Data System (ADS)

    Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.

    2016-04-01

    This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is shown as a weighted oriented graph that is represented as a plexus structure in the program view. This paper considers an example application of the algorithm with a typical transparent repeater scheme. In addition, the complexity of the current algorithm has been calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment, and input-output ports ranked in accordance with their priority. All described limitations allow a significant decrease in the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.

  9. A Dynamic Navigation Algorithm Considering Network Disruptions

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Wu, L.

    2014-04-01

    In a traffic network, link disruptions or recoveries caused by sudden accidents, bad weather, and traffic congestion lead to significant increases or decreases in travel times on some network links. A similar situation also occurs in real-time emergency evacuation planning in indoor areas. As the dynamic nature of real-time network information generates better navigation solutions than static information, a real-time dynamic navigation algorithm for emergency evacuation under stochastic disruptions or recoveries in the network is presented in this paper. Compared with traditional existing algorithms, this new algorithm adjusts the pre-existing path to a new optimal one according to the changing link travel times. With real-time network information, it can provide the updated optimal path quickly to adapt to rapidly changing network properties. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves high time efficiency in obtaining the exact solution, and indirect information can be calculated in spare time.
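
    A minimal sketch of the recompute-on-change behaviour, using NetworkX and a hypothetical four-node road network, is given below. Simply re-running a shortest-path search is the naive baseline; the algorithm described in the abstract instead adjusts the pre-existing path incrementally.

        import networkx as nx

        # Hypothetical four-node road network with travel times (minutes) as edge weights.
        G = nx.DiGraph()
        G.add_weighted_edges_from([("A", "B", 4), ("B", "D", 6), ("A", "C", 7),
                                   ("C", "D", 2), ("B", "C", 1)])
        print(nx.shortest_path(G, "A", "D", weight="weight"))   # ['A', 'B', 'C', 'D']

        # A disruption raises the travel time on B->C; recompute the route.
        G["B"]["C"]["weight"] = 30
        print(nx.shortest_path(G, "A", "D", weight="weight"))   # ['A', 'C', 'D']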

  10. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver, compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations and minimising the human perception error between the real and the simulator driver. One of the main limitations of the classical washout filters is that they are tuned using the worst-case scenario method. This tuning is based on trial and error and is affected by driving and programming experience, making it the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  11. A survey of DNA motif finding algorithms

    PubMed Central

    Das, Modan K; Dai, Ho-Kwok

    2007-01-01

    Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of

  12. Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Pospisil, Lukas; Nowakova, Jana

    2016-06-01

    An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to produce aesthetically pleasing, crossing-free layouts for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the node being minimized. A disadvantage of the Kamada-Kawai embedder algorithm is its computational requirements, caused by the search for the minimal potential energy of the whole system, which is minimized node by node. The node with the highest energy is minimized against all nodes until a local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient to search for a minimum, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
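
    The Barzilai-Borwein step-size rule itself is compact; the sketch below applies it to plain gradient descent on a toy quadratic standing in for the Kamada-Kawai layout energy, whose gradient is not reproduced here.

        import numpy as np

        def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3):
            # Gradient descent with the Barzilai-Borwein step size
            # alpha_k = (s_k . s_k) / (s_k . y_k), s_k = x_k - x_{k-1}, y_k = g_k - g_{k-1}.
            x_prev, g_prev = x0, grad(x0)
            x = x_prev - alpha0 * g_prev            # the first step uses a fixed step size
            for _ in range(n_iter):
                g = grad(x)
                s, y = x - x_prev, g - g_prev
                denom = s @ y
                alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
                x_prev, g_prev = x, g
                x = x - alpha * g
            return x

        # Toy quadratic standing in for the layout energy; the minimiser is the origin.
        A = np.diag([1.0, 10.0, 100.0])
        print(bb_gradient_descent(lambda z: A @ z, np.array([1.0, 1.0, 1.0])))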

  13. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study, both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, the performance of the algorithm should be analyzed for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data become more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
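
    The sample-then-assign idea can be sketched in a few lines with scikit-learn, as below; the report's procedure for choosing the sample size from a width and confidence level is not reproduced, and the fixed sample size here is an arbitrary illustration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        # Fit k-means on a random sample, then assign the full dataset to the
        # resulting centers; the sample size here is fixed rather than derived
        # from a width/confidence criterion as in the report.
        X, _ = make_blobs(n_samples=200_000, centers=5, random_state=0)
        rng = np.random.default_rng(0)
        sample = X[rng.choice(len(X), size=5_000, replace=False)]

        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(sample)
        labels = km.predict(X)                  # cheap assignment pass over all points
        print(np.bincount(labels))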

  14. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch

    PubMed Central

    Yurtkuran, Alkın

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed in which, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing the new solution acceptance rule and the probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591
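
    The sketch below illustrates one way such an acceptance rule can look: improving candidates are always kept, while worse candidates are accepted with a probability that decays nonlinearly over the run. The exponential schedule and its constants are illustrative assumptions, not the paper's exact formula, and the probabilistic multisearch component is omitted.

        import math
        import random

        def accept(old_cost, new_cost, iteration, max_iter, p0=0.3, decay=4.0):
            # Always accept improving candidates; accept worse candidates with a
            # probability that decays nonlinearly towards zero over the run
            # (an illustrative schedule, not the paper's exact formula).
            if new_cost <= old_cost:
                return True
            p = p0 * math.exp(-decay * iteration / max_iter)
            return random.random() < p

        # A worse move is sometimes kept early in the search, almost never late.
        print(accept(1.0, 1.2, iteration=10, max_iter=1000))
        print(accept(1.0, 1.2, iteration=990, max_iter=1000))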

  15. Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Luo, Hongbin; Li, Lemin; Yu, Hongfang

    2006-12-01

    Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.

  16. A Computationally Efficient Mel-Filter Bank VAD Algorithm for Distributed Speech Recognition Systems

    NASA Astrophysics Data System (ADS)

    Vlaj, Damjan; Kotnik, Bojan; Horvat, Bogomir; Kačič, Zdravko

    2005-12-01

    This paper presents a novel computationally efficient voice activity detection (VAD) algorithm and emphasizes the importance of such algorithms in distributed speech recognition (DSR) systems. When using VAD algorithms in telecommunication systems, the required capacity of the speech transmission channel can be reduced if only the speech parts of the signal are transmitted. A similar objective can be adopted in DSR systems, where the nonspeech parameters are not sent over the transmission channel. A novel approach is proposed for VAD decisions based on mel-filter bank (MFB) outputs with the so-called Hangover criterion. Comparative tests are presented between the proposed MFB VAD algorithm and three VAD algorithms used in the G.729, G.723.1, and DSR (advanced front-end) standards. These tests were made on the Aurora 2 database with different signal-to-noise ratios (SNRs). In the speech recognition tests, the proposed MFB VAD outperformed the G.723.1, G.729, and DSR VAD algorithms at all SNRs (relative improvement figures are given in the full text).
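
    For illustration, the sketch below implements a much simpler frame-energy VAD with a Hangover rule, which keeps the speech decision active for several frames after the last detection. The mel-filter bank front end of the proposed algorithm and the standardized VADs it is compared against are not reproduced; frame length, threshold, and hangover length are arbitrary.

        import numpy as np

        def energy_vad(signal, frame_len=256, threshold_db=-35.0, hangover=8):
            # Frame-wise energy VAD with a Hangover rule: once speech is detected,
            # the decision stays active for `hangover` further frames.
            n_frames = len(signal) // frame_len
            frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
            energy_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
            decisions, counter = [], 0
            for e in energy_db:
                if e > threshold_db:
                    counter = hangover
                decisions.append(counter > 0)
                counter = max(counter - 1, 0)
            return np.array(decisions)

        # Toy signal: silence, a burst of "speech", silence.
        sig = np.concatenate([0.001 * np.random.randn(4000),
                              0.3 * np.random.randn(4000),
                              0.001 * np.random.randn(4000)])
        print(energy_vad(sig).astype(int))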

  17. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed in which, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing the new solution acceptance rule and the probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591

  18. Feature weighted naïve Bayes algorithm for information retrieval of enterprise systems

    NASA Astrophysics Data System (ADS)

    Wang, Li; Ji, Ping; Qi, Jing; Shan, Siqing; Bi, Zhuming; Deng, Weiguo; Zhang, Naijing

    2014-01-01

    Automated information retrieval is critical for enterprise information systems to acquire knowledge from the vast amount of data sets. One challenge in information retrieval is text classification. Current practices rely heavily on the classical naïve Bayes algorithm due to its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption on the independence of terms is the main reason for an unsatisfactory classification in many real-world applications. To overcome the limitations, the dependent factors are considered by integrating a term frequency-inverse document frequency (TF-IDF) weighting algorithm in the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both frequencies and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted, and the comparisons with other classification methods have shown that the proposed method has outperformed other existing algorithms in terms of precision and index recall rate.
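
    A baseline TF-IDF weighted naive Bayes pipeline can be assembled directly in scikit-learn, as sketched below on a tiny made-up corpus; the article's further refinement of the TF-IDF weights (adding distribution information) is not reproduced.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Tiny made-up corpus; real enterprise documents would replace it.
        docs = ["invoice overdue payment reminder",
                "server outage incident report",
                "quarterly payment schedule update",
                "database incident postmortem analysis"]
        labels = ["finance", "it", "finance", "it"]

        # TF-IDF weights (rather than raw term counts) feed the multinomial
        # naive Bayes classifier, softening the pure term-independence view.
        model = make_pipeline(TfidfVectorizer(), MultinomialNB())
        model.fit(docs, labels)
        print(model.predict(["payment for the new server"]))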

  19. Classifying scaled and rotated textures using a region-matched algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Chih-Chia; Chen, Yu-Tin

    2012-07-01

    A novel method to correct texture variations resulting from scale magnification, narrowing caused by cropping into the original size, or spatial rotation is discussed. The variations usually occur in images captured by a camera using different focal lengths. A representative region-matched algorithm is developed to improve texture classification after magnification, narrowing, and spatial rotation. By using a minimum ellipse, a representative region-matched algorithm encloses a specific region extracted by the J-image segmentation algorithm. After translating the coordinates, the equation of an ellipse in the rotated texture can be formulated as that of an ellipse in the original texture. The rotated invariant property of ellipse provides an efficient method to identify the rotated texture. Additionally, the scale-variant representative region can be classified by adopting scale-invariant parameters. Moreover, a hybrid texture filter is developed. In the hybrid texture filter, the scheme of texture feature extraction includes the Gabor wavelet and the representative region-matched algorithm. Support vector machines are introduced as the classifier. The proposed hybrid texture filter performs excellently with respect to classifying both the stochastic and structural textures. Furthermore, experimental results demonstrate that the proposed algorithm outperforms conventional design algorithms.

  20. A hybrid algorithm for robust acoustic source localization in noisy and reverberant environments

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Ramesh; Dessonville, Timothy

    2014-09-01

    Acoustic source localization using microphone arrays is widely used in videoconferencing and surveillance systems. However, it still remains a challenging task to develop efficient algorithms for accurate estimation of source location using distributed data processing. In this work, we propose a new algorithm for efficient localization of a speaker in noisy and reverberant environments such as videoconferencing. We propose a hybrid algorithm that combines generalized cross correlation based phase transform method (GCC-PHAT) and Tabu search to obtain a robust and accurate estimate of the speaker location. The Tabu Search algorithm iteratively improves the time difference of arrival (TDOA) estimate of GCC-PHAT by examining the neighboring solutions until a convergence in the TDOA value is obtained. Experiments were performed based on real world data recorded from a meeting room in the presence of noise such as computer and fans. Our results demonstrate that the proposed hybrid algorithm outperforms GCC-PHAT especially when the noise level is high. This shows the robustness of the proposed algorithm in noisy and realistic videoconferencing systems.
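
    The GCC-PHAT component is standard and is sketched below with NumPy: the cross-power spectrum is whitened by its magnitude before the inverse transform, and the peak of the resulting correlation gives the TDOA estimate. The Tabu-search refinement described in the abstract is not included.

        import numpy as np

        def gcc_phat(sig, ref, fs, max_tau=None):
            # Estimate the time difference of arrival between two microphone
            # signals using the phase transform (PHAT) weighting.
            n = len(sig) + len(ref)
            SIG, REF = np.fft.rfft(sig, n=n), np.fft.rfft(ref, n=n)
            R = SIG * np.conj(REF)
            cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)   # PHAT-weighted correlation
            max_shift = n // 2 if max_tau is None else int(fs * max_tau)
            cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
            shift = np.argmax(np.abs(cc)) - max_shift
            return shift / float(fs)

        # Toy check: the second channel lags the first by 25 samples.
        fs = 16_000
        x = np.random.randn(4096)
        y = np.concatenate([np.zeros(25), x[:-25]])
        print(gcc_phat(y, x, fs) * fs)    # approximately 25 samples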

  1. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm.

    PubMed

    Iyer, Swathi P; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel T; Fair, Damien A

    2013-07-15

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve the direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011), and apply the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group average, as opposed to single subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails in determining directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed the direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054

  2. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoqian; Guo, Qinghua; Su, Yanjun; Xue, Baolin

    2016-07-01

    Filtering of light detection and ranging (LiDAR) data into the ground and non-ground points is a fundamental step in processing raw airborne LiDAR data. This paper proposes an improved progressive triangulated irregular network (TIN) densification (IPTD) filtering algorithm that can cope with a variety of forested landscapes, particularly both topographically and environmentally complex regions. The IPTD filtering algorithm consists of three steps: (1) acquiring potential ground seed points using the morphological method; (2) obtaining accurate ground seed points; and (3) building a TIN-based model and iteratively densifying TIN. The IPTD filtering algorithm was tested in 15 forested sites with various terrains (i.e., elevation and slope) and vegetation conditions (i.e., canopy cover and tree height), and was compared with seven other commonly used filtering algorithms (including morphology-based, slope-based, and interpolation-based filtering algorithms). Results show that the IPTD achieves the highest filtering accuracy for nine of the 15 sites. In general, it outperforms the other filtering algorithms, yielding the lowest average total error of 3.15% and the highest average kappa coefficient of 89.53%.

  3. A MPR optimization algorithm for FSO communication system with star topology

    NASA Astrophysics Data System (ADS)

    Zhao, Linlin; Chi, Xuefen; Li, Peng; Guan, Lin

    2015-12-01

    In this paper, we introduce multi-packet reception (MPR) technology to the outdoor free space optical (FSO) communication system to provide a substantial throughput gain. We address two challenges: how to realize MPR over the varying atmospheric turbulence channel, and how to adjust the MPR capability so that as many devices as possible can transmit simultaneously under bit error rate (BER) constraints. First, we use the reliability ordering with minimum mean square error successive interference cancellation (RO-MMSE-SIC) algorithm to realize MPR in the FSO communication system and derive the closed-form BER expression of the RO-MMSE-SIC algorithm. Then, based on the derived BER expression, we propose an adaptive MPR capability optimization algorithm so that the MPR capability is adapted to different turbulence channel states, yielding a substantial throughput gain over the varying atmospheric channel. The simulation results show that our RO-MMSE-SIC algorithm outperforms the conventional MMSE-SIC algorithm, and the derived exact BER expression is verified by Monte Carlo simulations. The validity and the indispensability of the proposed adaptive MPR capability optimization algorithm are verified as well.

  4. A Hybrid Algorithm for Missing Data Imputation and Its Application to Electrical Data Loggers.

    PubMed

    Turrado, Concepción Crespo; Sánchez Lasheras, Fernando; Calvo-Rollé, José Luis; Piñón-Pazos, Andrés-José; Melero, Manuel G; de Cos Juez, Francisco Javier

    2016-01-01

    The storage of data is a key process in the study of electrical power networks, for example when searching for harmonics or detecting an imbalance among phases. Missing data for any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) negatively affect any time-series study and must be addressed. When this occurs, missing data imputation algorithms are required. These algorithms substitute estimated values for the data that are missing. This research presents a new missing data imputation algorithm based on Self-Organizing Map neural networks and Mahalanobis distances and compares it not only with the well-known Multivariate Imputation by Chained Equations (MICE) technique but also with the Adaptive Assignation Algorithm (AAA) previously proposed by the authors. The results obtained demonstrate that the proposed method outperforms both algorithms. PMID:27626419
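
    The sketch below illustrates the Mahalanobis-distance idea behind the proposed imputer in a stripped-down form: each incomplete record borrows its missing values from the complete record that is closest in Mahalanobis distance over the observed coordinates. This is only a minimal stand-in; the actual method organizes donors with a Self-Organizing Map, which is not reproduced here, and the example matrix is made up.

        import numpy as np

        def impute_mahalanobis(X):
            """Fill NaNs in each incomplete row with values from the closest
            complete row, measured by Mahalanobis distance on observed columns."""
            X = np.asarray(X, float).copy()
            complete = X[~np.isnan(X).any(axis=1)]
            VI = np.linalg.pinv(np.cov(complete, rowvar=False))   # inverse covariance
            for i in np.where(np.isnan(X).any(axis=1))[0]:
                obs = ~np.isnan(X[i])
                d = X[i, obs] - complete[:, obs]
                dist = np.einsum('ij,jk,ik->i', d, VI[np.ix_(obs, obs)], d)
                donor = complete[np.argmin(dist)]
                X[i, ~obs] = donor[~obs]                           # copy missing values
            return X

        X = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 6.1],
                      [1.1, np.nan, 3.2],
                      [2.1, 3.9, np.nan]])
        print(impute_mahalanobis(X))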

  5. A Two-Pass Exact Algorithm for Selection on Parallel Disk Systems

    PubMed Central

    Mi, Tian; Rajasekaran, Sanguthevar

    2014-01-01

    Numerous OLAP queries in data warehousing applications process selection operations such as “top N”, median, and “top 5%”. Selection is a well-studied problem with numerous applications in the management of data and databases since, typically, any complex data query can be reduced to a series of basic operations such as sorting and selection. Parallel selection has also become an important fundamental operation, especially after parallel databases were introduced. In this paper, we present a deterministic algorithm, Recursive Sampling Selection (RSS), to solve the exact out-of-core selection problem, which we show needs no more than (2 + ε) passes (ε being a very small fraction). We compare our RSS algorithm with two other algorithms in the literature, namely Deterministic Sampling Selection (DSS) and QuickSelect, on parallel disk systems. Our analysis shows that DSS is a (2 + ε)-pass algorithm when the total number of input elements N is a polynomial in the memory size M (i.e., N = M^c for some constant c), whereas our proposed algorithm RSS runs in (2 + ε) passes without any such assumption. Experimental results indicate that both RSS and DSS outperform QuickSelect on parallel disk systems. In particular, the proposed algorithm RSS is more scalable and robust in handling big data when the input size is far greater than the core memory size, including the case N ≫ M^c. PMID:25374478

  6. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    NASA Astrophysics Data System (ADS)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction of arrival (DOA) parameters of multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique that exploits the central and non-central moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of the directional changes of the moving sources. The covariance-fitting-based algorithm and Kalman filtering are then combined to form an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least square-estimation of signal parameters via rotational invariance technique (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. It will be shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. Moreover, the performance of both methods improves as the SNR increases, with the improvement more prominent for the FAPI-TLS-ESPRIT method; however, the performance of both degrades as the number of sources increases. It will also be shown that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it will be shown that the more widely the sources are spaced, the more accurately the proposed method tracks the DOAs.
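
    The tracking stage described above feeds the covariance-fitting DOA estimates into a Kalman filter as measurements. A minimal constant-velocity Kalman filter over a stream of central-DOA measurements might look like the sketch below; the covariance-fitting estimator itself and all tuning values are assumptions for illustration.

        import numpy as np

        def track_doa(measurements, dt=1.0, q=1e-3, r=1.0):
            """Constant-velocity Kalman filter over central-DOA measurements (deg)."""
            F = np.array([[1.0, dt], [0.0, 1.0]])      # state: (angle, angular rate)
            H = np.array([[1.0, 0.0]])                 # only the angle is measured
            Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
            R = np.array([[r]])
            x, P = np.array([[measurements[0]], [0.0]]), np.eye(2)
            track = []
            for z in measurements:
                x, P = F @ x, F @ P @ F.T + Q                      # predict
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
                x = x + K @ (np.array([[z]]) - H @ x)              # update
                P = (np.eye(2) - K @ H) @ P
                track.append(float(x[0, 0]))
            return track

        # A source drifting from 10 to 20 degrees with noisy estimates.
        z = np.linspace(10, 20, 50) + np.random.default_rng(1).normal(0, 0.5, 50)
        print(track_doa(z)[-1])                        # close to 20 degrees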

  7. Use of Algorithm of Changes for Optimal Design of Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.

    2010-05-01

    For economic reasons, the optimal design of heat exchangers is required. The design of a heat exchanger is usually based on an iterative process involving the design conditions, equipment geometries, and the heat transfer and friction factor correlations. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost. The process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to heat exchanger design, with results that outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-tube heat exchanger [3]. This new method, based on the I Ching, was developed originally by the author. In the algorithm, the hexagram operations of the I Ching are generalized to binary strings, and an iterative procedure that imitates I Ching inference is defined. On the basis of [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was taken as the objective function. The case study shows that the algorithm of changes is comparable to the GA method: both can find the optimal solution in a short time. However, because it does not interchange information between binary strings, the algorithm of changes has an advantage over the GA for parallel computation.

  8. Self-training in significance space of support vectors for imbalanced biomedical event data

    PubMed Central

    2015-01-01

    Background: Pairwise relationships extracted from the biomedical literature are insufficient for formulating biomolecular interactions, so the extraction of complex relations (namely, biomedical events) has become the main focus of the text-mining community. However, there are two critical issues that are seldom dealt with by existing systems. First, the annotated corpora used to train prediction models are highly imbalanced. Second, supervised models trained on only a single annotated corpus can limit system performance. Fortunately, there is a large pool of unlabeled data containing much of the domain background that one can exploit. Results: In this study, we develop a new semi-supervised learning method to address the issues outlined above. The proposed algorithm efficiently exploits the unlabeled data to improve system performance. We furthermore extend our algorithm to a two-phase learning framework: the first phase balances the training data for initial model induction, and the second phase incorporates domain knowledge into the event extraction model. The effectiveness of our method is evaluated on the Genia event extraction corpus and a PubMed document pool. Our method can identify a small subset of the majority class that is sufficient for building a well-generalized prediction model, and it outperforms the traditional self-training algorithm in terms of F-measure. Our model, based on the training data and the unlabeled data pool, achieves performance comparable to state-of-the-art systems trained on a larger annotated set consisting of training and evaluation data. PMID:25952719
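
    For contrast with the proposed method, the sketch below shows the traditional self-training loop that the paper uses as its baseline: train, label the unlabeled pool, keep only confident predictions, and retrain. The significance-space restriction around the support vectors and the class-balancing phase are not reproduced, and the classifier and confidence threshold are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVC

        def self_train(clf, X_lab, y_lab, X_unlab, conf=0.9, rounds=5):
            """Plain self-training: add confidently predicted unlabeled examples."""
            X_lab, y_lab = np.asarray(X_lab, float), np.asarray(y_lab)
            pool = np.asarray(X_unlab, float)
            for _ in range(rounds):
                clf.fit(X_lab, y_lab)
                if len(pool) == 0:
                    break
                proba = clf.predict_proba(pool)
                keep = proba.max(axis=1) >= conf           # confident predictions only
                if not keep.any():
                    break
                X_lab = np.vstack([X_lab, pool[keep]])
                y_lab = np.concatenate([y_lab, clf.classes_[proba[keep].argmax(axis=1)]])
                pool = pool[~keep]
            return clf.fit(X_lab, y_lab)

        # Usage sketch: model = self_train(SVC(probability=True), X_lab, y_lab, X_unlab)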

  9. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
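
    The core operation, stretching or compressing a baseline resource distribution in proportion to a schedule change while preserving the total, can be sketched as below. This is only an illustration of the proportional rescaling idea, not the PACE card format or its element-of-cost breakdown.

        import numpy as np

        def rescale_resources(baseline, new_len):
            """Linearly stretch/compress a per-period resource profile to a new
            schedule length; the cumulative total is preserved."""
            old_t = np.linspace(0.0, 1.0, len(baseline) + 1)   # fraction of old schedule
            new_t = np.linspace(0.0, 1.0, new_len + 1)         # fraction of new schedule
            cum = np.concatenate(([0.0], np.cumsum(baseline))) # cumulative spend curve
            return np.diff(np.interp(new_t, old_t, cum))       # per-period amounts

        print(rescale_resources([10, 20, 30, 40], 6))          # 4 periods stretched to 6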

  10. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide an inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.

  11. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectrum; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included, along with some of the actual data and the graphs produced from it.
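
    A maximum-entropy spectral estimate is the spectrum of an all-pole (autoregressive) model fitted to the data. The sketch below fits the AR model with the Yule-Walker equations rather than Burg's recursion, so it is only a simplified illustration of the model form, not the FORTRAN 77 implementation described in the record.

        import numpy as np

        def ar_psd(x, order, nfft=512):
            """All-pole spectral estimate: fit AR coefficients from the sample
            autocorrelation (Yule-Walker) and evaluate sigma^2 / |A(e^jw)|^2."""
            x = np.asarray(x, float) - np.mean(x)
            n = len(x)
            r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
            Rm = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
            a = np.linalg.solve(Rm, r[1:])                 # AR coefficients
            sigma2 = r[0] - a @ r[1:]                      # driving-noise variance
            w = np.linspace(0, np.pi, nfft)
            A = 1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a
            return w, sigma2 / np.abs(A) ** 2

        # Two sinusoids in noise: the AR(8) spectrum shows two sharp peaks.
        t = np.arange(0, 1, 1e-3)
        x = np.sin(2*np.pi*100*t) + 0.5*np.sin(2*np.pi*220*t) + 0.1*np.random.randn(len(t))
        w, p = ar_psd(x, order=8)
        print(w[np.argmax(p)] * 1000 / (2 * np.pi))        # near 100 Hz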

  12. Molecular Motors: Power Strokes Outperform Brownian Ratchets.

    PubMed

    Wagoner, Jason A; Dill, Ken A

    2016-07-01

    Molecular motors convert chemical energy (typically from ATP hydrolysis) to directed motion and mechanical work. Their actions are often described in terms of "Power Stroke" (PS) and "Brownian Ratchet" (BR) mechanisms. Here, we use a transition-state model and stochastic thermodynamics to describe mechanisms ranging from PS to BR. We incorporate this model into Hill's diagrammatic method to develop a comprehensive model of motor processivity that is simple but sufficiently general to capture the full range of behavior observed for molecular motors. We demonstrate that, under all conditions, PS motors are faster, more powerful, and more efficient at constant velocity than BR motors. We show that these differences are very large for simple motors but become inconsequential for complex motors with additional kinetic barrier steps. PMID:27136319

  13. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and, to be useful in real-time industrial applications, parallel architectures and algorithms. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents either an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  14. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
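
    For reference, the classic sequential 1/2-approximation that such work is usually measured against is the greedy scheme below: scan the edges in order of decreasing weight and keep any edge whose endpoints are still free. The paper's own algorithm and its multithreaded extension are not reproduced here.

        def greedy_matching(edges):
            """Greedy 1/2-approximation for maximum weight matching.
            edges is an iterable of (u, v, weight) tuples."""
            matched, matching = set(), []
            for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
                if u not in matched and v not in matched:
                    matching.append((u, v, w))
                    matched.update((u, v))
            return matching

        # Greedy picks (b, c) with weight 5; the optimum (a, b) + (c, d) has weight 8,
        # which is within the factor-2 guarantee.
        print(greedy_matching([("a", "b", 4.0), ("b", "c", 5.0), ("c", "d", 4.0)]))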

  15. A systematic comparison of genome-scale clustering algorithms

    PubMed Central

    2012-01-01

    Background: A wealth of clustering algorithms has been applied to gene co-expression experiments. These algorithms cover a broad range of approaches, from conventional techniques such as k-means and hierarchical clustering, to graphical approaches such as k-clique communities, weighted gene co-expression networks (WGCNA) and paraclique. Comparison of these methods to evaluate their relative effectiveness provides guidance for algorithm selection, development and implementation. Most prior work on comparative clustering evaluation has focused on parametric methods. Graph theoretical methods are recent additions to the tool set for the global analysis and decomposition of microarray co-expression matrices that have not generally been included in earlier methodological comparisons. In the present study, a variety of parametric and graph theoretical clustering algorithms are compared using well-characterized transcriptomic data at a genome scale from Saccharomyces cerevisiae. Methods: For each clustering method under study, a variety of parameters were tested. Jaccard similarity was used to measure each cluster's agreement with every GO and KEGG annotation set, and the highest Jaccard score was assigned to the cluster. Clusters were grouped into small, medium, and large bins, and the Jaccard scores of the top five scoring clusters in each bin were averaged and reported as the best average top 5 (BAT5) score for the particular method. Results: Clusters produced by each method were evaluated based upon the positive match to known pathways. This produces a readily interpretable ranking of the relative effectiveness of clustering on the genes. Methods were also tested to determine whether they were able to identify clusters consistent with those identified by other clustering methods. Conclusions: Validation of clusters against known gene classifications demonstrates that, for this data, graph-based techniques outperform conventional clustering approaches, suggesting that further

  16. Improved local linearization algorithm for solving the quaternion equations

    NASA Technical Reports Server (NTRS)

    Yen, K.; Cook, G.

    1980-01-01

    The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.

  17. The infrared moving object detection and security detection related algorithms based on W4 and frame difference

    NASA Astrophysics Data System (ADS)

    Yin, Jiale; Liu, Lei; Li, He; Liu, Qiankun

    2016-07-01

    This paper presents infrared moving object detection and security detection algorithms for video surveillance based on the classical W4 algorithm and frame differencing. The classical W4 algorithm is one of the powerful background subtraction algorithms for infrared images and can detect moving objects accurately, completely, and quickly. However, it can only cope with slight movement of the background, and its error grows over time in long-term surveillance systems because the background model is never updated once established. In this paper, we present a detection algorithm based on the classical W4 algorithm and frame differencing. It not only overcomes the false detections caused by sudden changes in the background state, but also eliminates the holes caused by frame differencing. On this basis, we further design various security detection algorithms such as illegal intrusion alarms, illegal persistence alarms, and illegal displacement alarms. We compare our method with the classical W4 algorithm, frame differencing, and other state-of-the-art methods. The experiments detailed in this paper show that the proposed method outperforms the classical W4 algorithm and frame differencing and serves the security detection algorithms well.
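
    A much-simplified stand-in for the combined scheme is sketched below: a slowly adapting background model and a two-frame difference are thresholded separately, and only pixels flagged by both are kept as foreground. The thresholds, the update rate, and the alarm logic built on top of the masks are illustrative assumptions, not the paper's parameters.

        import numpy as np

        def detect_motion(frames, alpha=0.05, thresh=25):
            """Intersect background subtraction with frame differencing.
            frames is a list of 2-D grayscale arrays; returns one mask per step."""
            frames = [np.asarray(f, float) for f in frames]
            bg = frames[0].copy()
            masks = []
            for prev, cur in zip(frames, frames[1:]):
                fg_bg = np.abs(cur - bg) > thresh          # background subtraction
                fg_fd = np.abs(cur - prev) > thresh        # two-frame difference
                masks.append(fg_bg & fg_fd)                # keep pixels flagged by both
                bg = (1 - alpha) * bg + alpha * cur        # slowly update the background
            return masks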

  18. Visual Servoing of Quadrotor Micro-Air Vehicle Using Color-Based Tracking Algorithm

    NASA Astrophysics Data System (ADS)

    Azrad, Syaril; Kendoul, Farid; Nonami, Kenzo

    This paper describes a vision-based tracking system for an autonomous quadrotor unmanned micro-aerial vehicle (MAV). The vision-based control system relies on a color target detection and tracking algorithm using integral images, Kalman filters for relative pose estimation, and a nonlinear controller for MAV stabilization and guidance. The vision algorithm relies on information from a single onboard camera. An arbitrary target can be selected in real time from the ground control station, thereby outperforming template- and learning-based approaches: the target does not need to be learned in advance or matched against a stored template. Experimental results obtained from outdoor flight tests showed that the vision-control system enabled the MAV to track and hover above the target for as long as battery power was available. The image-processing results are fed to a nonlinear controller designed for the MAV by the researchers in our group.

  19. An ensemble of k-nearest neighbours algorithm for detection of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Gök, Murat

    2015-04-01

    Parkinson's disease is a disease of the central nervous system that leads to severe difficulties in motor functions. Developing computational tools for recognizing Parkinson's disease at the early stages is very desirable for alleviating the symptoms. In this paper, we developed a discriminative model based on a selected feature subset and applied several classifier algorithms to disease detection. All classifiers were evaluated both stand-alone and within a rotation-forest ensemble on a Parkinson's disease data set according to a blind testing protocol. The rotation-forest ensemble of k-nearest-neighbour classifiers outperforms previously reported state-of-the-art methods in terms of both prediction accuracy (98.46%) and area under the receiver operating characteristic curve (0.99).

  20. Optree: a learning-based adaptive watershed algorithm for neuron segmentation.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Metaxas, Dimitris

    2014-01-01

    We present a new algorithm for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. Our method selects a collection of nodes from the watershed merging tree as the proposed segmentation. This is achieved by building a conditional random field (CRF) whose underlying graph is the merging tree; the maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our algorithm outperforms state-of-the-art methods. Both the inference and the training are very efficient because the graph is tree-structured. Furthermore, we develop an interactive segmentation framework that selects uncertain regions for a user to proofread, where the uncertainty is measured by the marginals of the graphical model. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally. PMID:25333106

  1. Comparisons of four approximation algorithms for large-scale linkage map construction

    PubMed Central

    Jenkins, Johnie N.; McCarty, Jack C.; Lou, Xiang-Yang

    2011-01-01

    Efficient construction of large-scale linkage maps is highly desired in current gene mapping projects. To evaluate the performance of available approaches in the literature, four published methods, the insertion (IN), seriation (SER), neighbor mapping (NM), and unidirectional growth (UG) were compared on the basis of simulated F2 data with various population sizes, interferences, missing genotype rates, and mis-genotyping rates. Simulation results showed that the IN method outperformed, or at least was comparable to, the other three methods. These algorithms were also applied to a real data set and results showed that the linkage order obtained by the IN algorithm was superior to the other methods. Thus, this study suggests that the IN method should be used when constructing large-scale linkage maps. PMID:21611760

  2. Optimization of hybrid laminated composites using the multi-objective gravitational search algorithm (MOGSA)

    NASA Astrophysics Data System (ADS)

    Hemmatian, Hossein; Fereidoon, Abdolhossein; Assareh, Ehsanolah

    2014-09-01

    The multi-objective gravitational search algorithm (MOGSA) technique is applied to hybrid laminates to achieve minimum weight and cost. The investigated laminate is made of glass-epoxy and carbon-epoxy plies to combine the economical attributes of the first with the light weight and high-stiffness properties of the second in order to make the trade-off between the cost and weight as the objective functions. The first natural flexural frequency was considered as a constraint. The results obtained using the MOGSA, including the Pareto set, optimum stacking sequences and number of plies made of either glass or carbon fibres, were compared with those using the genetic algorithm (GA) and ant colony optimization (ACO) reported in the literature. The comparisons confirmed the advantages of hybridization and showed that the MOGSA outperformed the GA and ACO in terms of the functions' value and constraint accuracy.

  3. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  4. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  5. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to the higher-cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher-cardinality usage. A comparison of results with the binary case is provided. PMID:10021767

  6. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  7. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  8. A hierarchical exact accelerated stochastic simulation algorithm

    PubMed Central

    Orendorff, David; Mjolsness, Eric

    2012-01-01

    A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214
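
    For orientation, the exact baseline that ER-leap and HiER-leap accelerate is Gillespie's direct stochastic simulation algorithm, sketched below on a toy one-reaction system; HiER-leap's propensity bounding, blocking, and rejection machinery is not reproduced.

        import numpy as np

        def gillespie(x0, stoich, propensity, t_max, seed=0):
            """Gillespie's direct SSA.  stoich is (reactions x species); propensity(x)
            returns the rate of each reaction in state x."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, np.array(x0, float)
            path = [(t, x.copy())]
            while t < t_max:
                a = propensity(x)
                a0 = a.sum()
                if a0 <= 0:
                    break
                t += rng.exponential(1.0 / a0)             # time to the next reaction
                j = rng.choice(len(a), p=a / a0)           # which reaction fires
                x += stoich[j]
                path.append((t, x.copy()))
            return path

        # Example: A -> B with rate 0.5 per molecule of A, starting from 100 A.
        stoich = np.array([[-1, 1]])
        print(gillespie([100, 0], stoich, lambda x: np.array([0.5 * x[0]]), 5.0)[-1])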

  9. TrackEye tracking algorithm characterization

    NASA Astrophysics Data System (ADS)

    Valley, Michael T.; Shields, Robert W.; Reed, Jack M.

    2004-10-01

    TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.

  10. TIRS stray light correction: algorithms and performance

    NASA Astrophysics Data System (ADS)

    Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki

    2015-09-01

    The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoys sites varied seasonally and, in worst cases, exceeded 9 K error. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly removes stray light artifacts from the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.

  11. TrackEye tracking algorithm characterization.

    SciTech Connect

    Reed, Jack W.; Shields, Rob W; Valley, Michael T.

    2004-08-01

    TrackEye is a film digitization and target tracking system that offers the potential for quantitatively measuring the dynamic state variables (e.g., absolute and relative position, orientation, linear and angular velocity/acceleration, spin rate, trajectory, angle of attack, etc.) for moving objects using captured single or dual view image sequences. At the heart of the system is a set of tracking algorithms that automatically find and quantify the location of user selected image details such as natural test article features or passive fiducials that have been applied to cooperative test articles. This image position data is converted into real world coordinates and rates with user specified information such as the image scale and frame rate. Though tracking methods such as correlation algorithms are typically robust by nature, the accuracy and suitability of each TrackEye tracking algorithm is in general unknown even under good imaging conditions. The challenges of optimal algorithm selection and algorithm performance/measurement uncertainty are even more significant for long range tracking of high-speed targets where temporally varying atmospheric effects degrade the imagery. This paper will present the preliminary results from a controlled test sequence used to characterize the performance of the TrackEye tracking algorithm suite.

  12. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  13. A novel surface defect inspection algorithm for magnetic tile

    NASA Astrophysics Data System (ADS)

    Xie, Luofeng; Lin, Lijun; Yin, Ming; Meng, Lintao; Yin, Guofu

    2016-07-01

    In this paper, we propose a defect extraction method for magnetic tile images based on the shearlet transform. The shearlet transform is a method of multi-scale geometric analysis. Compared with similar methods, the shearlet transform offers higher directional sensitivity, which is useful for accurately extracting geometric characteristics from data. In general, a magnetic tile image captured by a CCD camera consists mainly of a target area and background. Our strategy for extracting the surface defects of the magnetic tile comprises two steps, both critical: image preprocessing and defect extraction. After preprocessing the image, we extract the target area. Because of the low contrast in the magnetic tile image, we apply the discrete shearlet transform to enhance the contrast between the defect area and the normal area. Next, we apply a threshold method to generate a binary image. To validate our algorithm, we compare our experimental results with the Otsu method, the curvelet transform and the nonsubsampled contourlet transform. The results show that our algorithm outperforms the other methods considered and can extract defects very effectively.

  14. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  15. 31 CFR 561.328 - Reduce significantly, significantly reduced, and significant reduction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Reduce significantly, significantly reduced, and significant reduction. 561.328 Section 561.328 Money and Finance: Treasury Regulations... IRANIAN FINANCIAL SANCTIONS REGULATIONS General Definitions § 561.328 Reduce significantly,...

  16. 31 CFR 561.328 - Reduce significantly, significantly reduced, and significant reduction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Reduce significantly, significantly reduced, and significant reduction. 561.328 Section 561.328 Money and Finance: Treasury Regulations... IRANIAN FINANCIAL SANCTIONS REGULATIONS General Definitions § 561.328 Reduce significantly,...

  17. "Clinical" Significance: "Clinical" Significance and "Practical" Significance are NOT the Same Things

    ERIC Educational Resources Information Center

    Peterson, Lisa S.

    2008-01-01

    Clinical significance is an important concept in research, particularly in education and the social sciences. The present article first compares clinical significance to other measures of "significance" in statistics. The major methods used to determine clinical significance are explained and the strengths and weaknesses of clinical significance…

  18. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
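
    To make the predictor-corrector baseline concrete, the sketch below runs classical natural-parameter continuation on a scalar toy homotopy H(x, lam) that deforms an easy equation into x^3 - 2 = 0. The monolithic update and the flow-solver setting of the thesis are not reproduced; the homotopy and step counts are made up for illustration.

        def continuation(H, H_x, H_lam, x0, steps=20):
            """Euler predictor along dx/dlam = -H_lam/H_x, then Newton corrector
            at fixed lam, marching lam from 0 to 1."""
            x, lam = float(x0), 0.0
            dlam = 1.0 / steps
            for _ in range(steps):
                x += -H_lam(x, lam) / H_x(x, lam) * dlam   # predictor
                lam += dlam
                for _ in range(10):                        # corrector (Newton)
                    step = H(x, lam) / H_x(x, lam)
                    x -= step
                    if abs(step) < 1e-12:
                        break
            return x

        # H(x, lam) = (1 - lam)*(x - 3) + lam*(x**3 - 2): start at x = 3, end at 2**(1/3).
        H     = lambda x, lam: (1 - lam) * (x - 3) + lam * (x ** 3 - 2)
        H_x   = lambda x, lam: (1 - lam) + lam * 3 * x ** 2
        H_lam = lambda x, lam: -(x - 3) + (x ** 3 - 2)
        print(continuation(H, H_x, H_lam, 3.0))            # approximately 1.2599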

  19. Diagnosis of Brain Metastases from Lung Cancer Using a Modified Electromagnetism like Mechanism Algorithm.

    PubMed

    Chen, Kun-Huang; Wang, Kung-Jeng; Adrian, Angelia Melani; Wang, Kung-Min; Teng, Nai-Chia

    2016-01-01

    Brain metastases are commonly found in patients diagnosed with a primary malignancy of the lung. Lung cancer patients with brain metastasis tend to have poor survivability, with a median of less than 6 months. Therefore, an early and effective detection system for this disease is needed to help prolong patients' survival and improve their quality of life. A modified electromagnetism-like mechanism (EM) algorithm, MEM-SVM, is proposed by combining the EM algorithm with a support vector machine (SVM) as the classifier and the opposite sign test (OST) as the local search technique. The proposed method is applied to 44 UCI and IDA datasets and 5 cancer microarray datasets as a preliminary experiment, and is additionally tested on 4 public lung cancer microarray datasets. Further, we tested our method on a nationwide dataset of brain metastasis from lung cancer (BMLC) in Taiwan. Because real medical datasets are highly imbalanced, the synthetic minority over-sampling technique (SMOTE) is utilized to handle this problem. The proposed method is compared against 8 other popular benchmark classifiers and feature selection methods, with performance evaluated by accuracy and the Kappa index. For the 44 UCI and IDA datasets and the 5 cancer microarray datasets, a non-parametric statistical test confirmed that MEM-SVM outperformed the other methods. For the 4 public lung cancer microarray datasets, MEM-SVM still achieved the highest mean accuracy and Kappa index. Owing to the imbalance of the real BMLC dataset, all methods achieve good accuracy without significant differences among them; however, on the balanced BMLC dataset, MEM-SVM appears to be the best method, with higher accuracy and Kappa index. We successfully developed MEM-SVM to predict the occurrence of brain metastasis from lung cancer in combination with the SMOTE technique to handle the class imbalance. The results confirmed that MEM
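
    The class-balancing step mentioned above can be illustrated with off-the-shelf tools, assuming scikit-learn and imbalanced-learn are installed. The toy data below merely stands in for the BMLC cohort, and the plain RBF-SVM is not the full MEM-SVM pipeline.

        from collections import Counter
        from imblearn.over_sampling import SMOTE
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        # Imbalanced toy data (95% majority / 5% minority), standing in for BMLC.
        X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
        print("before:", Counter(y))

        # SMOTE synthesizes minority-class examples until the classes are balanced.
        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
        print("after: ", Counter(y_bal))

        clf = SVC(kernel="rbf").fit(X_bal, y_bal)          # classifier on the balanced set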

  20. Face recognition algorithms surpass humans matching faces over changes in illumination.

    PubMed

    O'Toole, Alice J; Jonathon Phillips, P; Jiang, Fang; Ayyad, Janet; Penard, Nils; Abdi, Hervé

    2007-09-01

    There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be "difficult" and six algorithms surpassed humans on "easy" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control: humans. PMID:17627050

  1. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on the kernel extreme learning machine (KELM) for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the kernel function of the KELM instead of a single fixed kernel, and the kernel parameters and kernel weights are optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM generally outperforms six other state-of-the-art approaches, as the SABC search effectively determines solution-updating strategies and suitable parameters, producing a flexible kernel function. Simulations demonstrate that the proposed algorithm not only self-adaptively determines suitable parameters and solution-updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods, with good stability.
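
    The kernel-ELM part of the method admits a compact closed-form solution, sketched below with a fixed weighted sum of two Gaussian kernels; in SABC-MKELM these widths and weights would instead be tuned by the artificial bee colony search, which is not reproduced, and the toy regression task is an assumption.

        import numpy as np

        def combined_kernel(A, B, gammas, weights):
            """Weighted sum of Gaussian (RBF) kernels between row sets A and B."""
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return sum(w * np.exp(-g * d2) for g, w in zip(gammas, weights))

        class MultiKernelELM:
            """Kernel ELM: beta = (K + I/C)^(-1) T, prediction = K(x, X) beta."""
            def __init__(self, gammas, weights, C=1.0):
                self.gammas, self.weights, self.C = gammas, weights, C

            def fit(self, X, T):
                self.X = np.asarray(X, float)
                K = combined_kernel(self.X, self.X, self.gammas, self.weights)
                self.beta = np.linalg.solve(K + np.eye(len(K)) / self.C,
                                            np.asarray(T, float))
                return self

            def predict(self, Xq):
                K = combined_kernel(np.asarray(Xq, float), self.X,
                                    self.gammas, self.weights)
                return K @ self.beta

        # Toy regression: fit y = sin(x) with two kernel widths mixed 70/30.
        X = np.linspace(0, 6, 40).reshape(-1, 1)
        model = MultiKernelELM([0.5, 5.0], [0.7, 0.3], C=100).fit(X, np.sin(X).ravel())
        print(model.predict([[1.5]]))                      # close to sin(1.5) = 0.997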

  2. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and of the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  3. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
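
    A toy illustration of the hybrid idea is sketched below: a plain bit-string genetic algorithm in which every offspring is polished by greedy bit-flip local search before joining the population. The geometric model-matching application from the presentation is not reproduced, and all parameter values are made up.

        import random

        def hybrid_ga(fitness, n_bits, pop_size=30, gens=40, p_mut=0.02):
            """Genetic search whose offspring are improved by bit-flip hill climbing."""
            def local_search(ind):
                improved = True
                while improved:
                    improved = False
                    for i in range(n_bits):
                        trial = ind[:i] + [1 - ind[i]] + ind[i + 1:]
                        if fitness(trial) > fitness(ind):
                            ind, improved = trial, True
                return ind

            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_bits)              # one-point crossover
                    child = [1 - g if random.random() < p_mut else g
                             for g in a[:cut] + b[cut:]]           # mutation
                    children.append(local_search(child))           # memetic step
                pop = parents + children
            return max(pop, key=fitness)

        # One-max toy problem: the hybrid quickly finds the all-ones string.
        print(hybrid_ga(lambda ind: sum(ind), n_bits=20))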

  4. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  5. Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.

    PubMed

    Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay

    2015-12-01

    In this paper, we propose a new blind learning algorithm, namely the Benveniste-Goursat input-output decision (BG-IOD) algorithm, to enhance the convergence performance of neural-network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER whenever the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and dual-mode constant modulus algorithms, in terms of both convergence performance and SER performance for nonlinear equalization. PMID:25706894

  6. Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources

    PubMed Central

    Guimarães, Dayan A.; de Souza, Rausley A. A.

    2014-01-01

    We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of source signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. These norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results show that the norm-based improved MDL (iMDL) algorithm can outperform the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently proposed estimator based on random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and RMT-based estimators in some situations. PMID:25330050
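
    The classical MDL estimator that serves as the input to iMDL is reproduced below in its standard eigenvalue form; the norm-based refinement itself is not, and the example eigenvalues are made-up numbers chosen to show two dominant sources.

        import numpy as np

        def mdl_sources(eigvals, n_snapshots):
            """Wax-Kailath MDL estimate of the number of sources from the
            eigenvalues of a p-sensor sample covariance with N snapshots."""
            lam = np.sort(np.asarray(eigvals, float))[::-1]
            p, N = len(lam), n_snapshots
            scores = []
            for k in range(p):
                tail = lam[k:]                              # candidate noise eigenvalues
                geo = np.exp(np.mean(np.log(tail)))
                arith = np.mean(tail)
                ll = -N * (p - k) * np.log(geo / arith)     # sphericity log-likelihood
                penalty = 0.5 * k * (2 * p - k) * np.log(N)
                scores.append(ll + penalty)
            return int(np.argmin(scores))

        # Six sensors, 200 snapshots, two dominant eigenvalues -> 2 sources.
        print(mdl_sources([9.1, 4.2, 1.05, 1.02, 0.99, 0.97], n_snapshots=200))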

  7. An online spike detection and spike classification algorithm capable of instantaneous resolution of overlapping spikes

    PubMed Central

    Natora, Michal; Boucsein, Clemens; Munk, Matthias H. J.; Obermayer, Klaus

    2009-01-01

    For the analysis of neuronal cooperativity, simultaneously recorded extracellular signals from neighboring neurons need to be sorted reliably by a spike sorting method. Many algorithms have been developed to this end, however, to date, none of them manages to fulfill a set of demanding requirements. In particular, it is desirable to have an algorithm that operates online, detects and classifies overlapping spikes in real time, and that adapts to non-stationary data. Here, we present a combined spike detection and classification algorithm, which explicitly addresses these issues. Our approach makes use of linear filters to find a new representation of the data and to optimally enhance the signal-to-noise ratio. We introduce a method called “Deconfusion” which de-correlates the filter outputs and provides source separation. Finally, a set of well-defined thresholds is applied and leads to simultaneous spike detection and spike classification. By incorporating a direct feedback, the algorithm adapts to non-stationary data and is, therefore, well suited for acute recordings. We evaluate our method on simulated and experimental data, including simultaneous intra/extra-cellular recordings made in slices of a rat cortex and recordings from the prefrontal cortex of awake behaving macaques. We compare the results to existing spike detection as well as spike sorting methods. We conclude that our algorithm meets all of the mentioned requirements and outperforms other methods under realistic signal-to-noise ratios and in the presence of overlapping spikes. PMID:19499318

  8. A graph isomorphism algorithm using signatures computed via quantum walk search model

    NASA Astrophysics Data System (ADS)

    Wang, Huiquan; Wu, Junjie; Yang, Xuejun; Yi, Xun

    2015-03-01

    In this paper, we propose a new algorithm based on a quantum walk search model to distinguish strongly similar graphs. Our algorithm computes a signature for each graph via the quantum walk search model and uses the signatures to distinguish non-isomorphic graphs. Our method is less complex than those of previous works. In addition, our algorithm can be extended by raising the signature level: the higher the level adopted, the stronger the distinguishing ability and the higher the complexity of the algorithm. Our algorithm was tested with standard benchmarks from four databases. We note that the weakest signature, at level 1, can distinguish all similar graphs with a time complexity of O(N^3.5), which outperforms the previous best work except in the case of strongly regular graphs (SRGs). Once the signature is raised to level 3, all SRGs tested can be distinguished successfully; in this case, the time complexity is O(N^5.5), also better than the previous best work.

  9. 31 CFR 561.404 - Significant transaction or transactions; significant financial services; significant financial...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Significant transaction or transactions; significant financial services; significant financial transaction. 561.404 Section 561.404 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY...

  10. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
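
    As a rough illustration of the first two control schemes named above (I&Q versus amplitude and phase), the toy sketch below converts an I/Q sample pair into field amplitude and phase and applies a generic proportional-integral correction; the gains, structure, and function names are placeholder assumptions and do not represent the Jefferson Lab implementation.

        import numpy as np

        def iq_to_amp_phase(i_sample, q_sample):
            # Convert one I/Q sample pair into cavity field amplitude and phase.
            return np.hypot(i_sample, q_sample), np.arctan2(q_sample, i_sample)

        def pi_correction(setpoint, measured, integrator, kp=0.5, ki=0.05):
            # One step of a generic proportional-integral regulator (illustrative gains);
            # returns the drive correction and the updated integrator state.
            error = setpoint - measured
            integrator += error
            return kp * error + ki * integrator, integrator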

  11. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
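
    For readers unfamiliar with the primitive being compared, a minimal counter-based (linear) barrier is sketched below in Python; it only illustrates what a barrier does and is unrelated to the Flex/32 implementations timed in the paper.

        import threading

        class LinearBarrier:
            # Minimal counter-based barrier: the last thread to arrive releases the rest.
            def __init__(self, n_threads):
                self.n = n_threads
                self.count = 0
                self.generation = 0
                self.cond = threading.Condition()

            def wait(self):
                with self.cond:
                    gen = self.generation
                    self.count += 1
                    if self.count == self.n:       # last arrival: open the barrier
                        self.count = 0
                        self.generation += 1
                        self.cond.notify_all()
                    else:
                        while gen == self.generation:
                            self.cond.wait()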

  12. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
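
    A minimal sketch of the generic multiplicative weights update scheme referred to above, in its common computer-science form; the population-genetics equations that the paper shows to be equivalent to it under weak selection are not reproduced, and the payoff encoding below is an assumption.

        import numpy as np

        def mwua(payoffs, eta=0.1):
            # Multiplicative weights update: payoffs is a T x n array with entries in
            # [-1, 1], giving the payoff of each of the n actions in each of T rounds.
            # Returns the final probability distribution over actions.
            weights = np.ones(payoffs.shape[1])
            for round_gains in payoffs:
                weights *= (1.0 + eta) ** round_gains   # reward good actions multiplicatively
                weights /= weights.sum()                # renormalize to a distribution
            return weights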

  13. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.

  14. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ_0 + εℋ_1 + ... of a small parameter ε, normalization constructs a map which converts the principal part ℋ_0 into an integral of the transformed system; relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.

  15. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve real-world optimization problems, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  16. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  17. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  18. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.

  19. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  20. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  1. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.

  2. Statistical significance of normalized global alignment.

    PubMed

    Peris, Guillermo; Marzal, Andrés

    2014-03-01

    The comparison of homologous proteins from different species is a first step toward a function assignment and a reconstruction of the species evolution. Though local alignment is mostly used for this purpose, global alignment is important for constructing multiple alignments or phylogenetic trees. However, statistical significance of global alignments is not completely clear, lacking a specific statistical model to describe alignments or depending on computationally expensive methods like Z-score. Recently we presented a normalized global alignment, defined as the best compromise between global alignment cost and length, and showed that this new technique led to better classification results than Z-score at a much lower computational cost. However, it is necessary to analyze the statistical significance of the normalized global alignment in order to be considered a completely functional algorithm for protein alignment. Experiments with unrelated proteins extracted from the SCOP ASTRAL database showed that normalized global alignment scores can be fitted to a log-normal distribution. This fact, obtained without any theoretical support, can be used to derive statistical significance of normalized global alignments. Results are summarized in a table with fitted parameters for different scoring schemes. PMID:24400820
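
    A minimal sketch of how a log-normal null model of the kind described above might be fitted and used to assign significance, assuming the right tail is the interesting one; the tail direction, estimator, and names are assumptions, not details taken from the paper.

        import math
        import numpy as np

        def fit_lognormal(scores):
            # Fit a log-normal by estimating the mean and standard deviation of the
            # log-transformed scores of unrelated (null) protein pairs.
            logs = np.log(np.asarray(scores, dtype=float))
            return logs.mean(), logs.std(ddof=1)

        def right_tail_pvalue(score, mu, sigma):
            # P(X >= score) under the fitted log-normal; whether the left or right tail
            # is relevant depends on the scoring scheme (assumption: larger = rarer).
            z = (math.log(score) - mu) / sigma
            return 0.5 * math.erfc(z / math.sqrt(2.0))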

  3. Toward Developing an Unbiased Scoring Algorithm for "NASA" and Similar Ranking Tasks.

    ERIC Educational Resources Information Center

    Lane, Irving M.; And Others

    1981-01-01

    Presents both logical and empirical evidence to illustrate that the conventional scoring algorithm for ranking tasks significantly underestimates the initial level of group ability and that Slevin's alternative scoring algorithm significantly overestimates the initial level of ability. Presents a modification of Slevin's algorithm which authors…

  4. Dynamic Bubble-Check Algorithm for Check Node Processing in Q-Ary LDPC Decoders

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Bai, Baoming; Ma, Xiao; Sun, Rong

    A simplified algorithm for check node processing of extended min-sum (EMS) q-ary LDPC decoders is presented in this letter. Compared with the bubble check algorithm, the so-called dynamic bubble-check (DBC) algorithm aims to further reduce the computational complexity of the elementary check node (ECN) processing. By introducing two flag vectors in ECN processing, the DBC algorithm can use the minimum number of comparisons at each step. Simulation results show that the DBC algorithm uses significantly fewer comparison operations than the bubble check algorithm, and presents no performance loss compared with the standard EMS algorithm on AWGN channels.

  5. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
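
    For context, a textbook particle swarm minimizer is sketched below; the inertia and acceleration coefficients are illustrative defaults, and the sketch is unrelated to the SPEAR3 multi-objective optimization code.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            # Generic (single-objective) particle swarm minimizer; f maps a 1-D position
            # array to a scalar, bounds is a (low, high) pair of arrays.
            rng = np.random.default_rng(seed)
            low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
            x = rng.uniform(low, high, size=(n_particles, low.size))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, low, high)
                vals = np.array([f(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, pbest_val.min()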

  6. Algorithms for builder guidelines

    SciTech Connect

    Balcomb, J.D.; Lekov, A.B.

    1989-06-01

    The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.

  7. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  8. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  9. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms.

    PubMed

    Hernández-Ocaña, Betania; Pozos-Parra, Ma Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators seeks to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, with respect to those mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem. PMID:27057156

  10. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms

    PubMed Central

    Hernández-Ocaña, Betania; Pozos-Parra, Ma. Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators seeks to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, with respect to those mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem. PMID:27057156

  11. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms

    PubMed Central

    2014-01-01

    Background Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, each possessed its own merits and claimed to outperform the others. However, such claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms. And based on the proposed performance indices, we conducted a comprehensive performance evaluation. Results We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm for predicting cooperative TF pairs may be strong in some respects but weak in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TFs identification algorithms. Most importantly, these proposed indices can be easily applied to

  12. Quantum-Inspired Genetic Algorithm or Quantum Genetic Algorithm: Which Is It?

    NASA Astrophysics Data System (ADS)

    Jones, Erika

    2015-04-01

    Our everyday work focuses on genetic algorithms (GAs) related to quantum computing, where we call "related" algorithms those falling into one of two classes: (1) GAs run on classical computers but making use of quantum mechanical (QM) constructs and (2) GAs run on quantum hardware. Though convention has yet to be set with respect to usage of the accepted terms quantum-inspired genetic algorithm (QIGA) and quantum genetic algorithm (QGA), we find the two terms highly suitable, respectively, as labels for the aforementioned classes. With these specific definitions in mind, the difference between a QIGA and a QGA is greater than might first be appreciated, particularly from a perspective that emphasizes GA use as a general computational tool irrespective of the QM aspects (1) suggested by QIGAs and (2) inherent in QGAs. We offer a theoretical standpoint highlighting key differences, both obvious and, more significantly, subtle, to be considered in the general design of a QIGA versus that of a QGA.

  13. Faster Parameterized Algorithms for Minor Containment

    NASA Astrophysics Data System (ADS)

    Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.

    The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^(k^2) · (h+k-1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^((2k+1)·log k) · h^(2k) · 2^(2h^2) · m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^(O(k)) · h^(2k) · 2^(O(h)) · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit us to solve some related problems within the same time bounds, like induced minor or contraction minor containment.

  14. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chain management, and numerous studies have been conducted lately on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research is aimed at providing a better replenishment policy for multi-product, single-supplier situations for chemical raw materials of textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that will cause the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that indirect grouping outperforms direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for individual items. Thus the replenishment cycle time for each product is found as T×ki. Firstly, based on data, a comparison between the currently prevailing (individual) process and RAND using the actual demands is provided, which shows a 49% improvement in total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method; however, demands can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, application of RAND with corrected demand displays an even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
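
    A minimal sketch of Holt's linear exponential smoothing, the demand-correction step mentioned above; the smoothing constants are placeholders to be tuned on the data, and the RAND grouping computation itself is not shown.

        def holt_forecast(demand, alpha=0.3, beta=0.1, horizon=2):
            # Holt's linear (double) exponential smoothing: track a level and a trend,
            # then extrapolate `horizon` periods ahead (the study only trusts one or
            # two months of forecast). alpha and beta are placeholder constants.
            level, trend = demand[0], demand[1] - demand[0]
            for y in demand[1:]:
                prev_level = level
                level = alpha * y + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return [level + k * trend for k in range(1, horizon + 1)]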

  15. Implementation and analysis of a fast backprojection algorithm

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Majumder, Uttam K.; Buxa, Peter; Backues, Mark J.; Lindgren, Andrew C.

    2006-05-01

    The convolution backprojection algorithm is an accurate synthetic aperture radar imaging technique, but it has seen limited use in the radar community due to its high computational costs. Therefore, significant research has been conducted for a fast backprojection algorithm, which surrenders some image quality for increased computational efficiency. This paper describes an implementation of both a standard convolution backprojection algorithm and a fast backprojection algorithm optimized for use on a Linux cluster and a field-programmable gate array (FPGA) based processing system. The performance of the different implementations is compared using synthetic ideal point targets and the SPIE XPatch Backhoe dataset.

  16. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  17. The BRUSH algorithm for two-electron integrals on GPU

    NASA Astrophysics Data System (ADS)

    Rák, Ádám; Cserey, György

    2015-02-01

    This Letter presents a new algorithmic method developed to evaluate two-electron repulsion integrals based on contracted Gaussian basis functions in a parallel way. This new algorithm scheme provides distinct SIMD (single instruction multiple data) optimized paths which symbolically transform integral parameters into target integral algorithms. Our measurements indicate that the method gives a significant improvement over the CPU-friendly PRISM algorithm. The benchmark tests (evaluation of more than 10^8 integrals using the STO-3G basis set) of our GPU (NVIDIA GTX 780) implementation showed up to 750-fold speedup compared to a single core of an Athlon II X4 635 CPU.

  18. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

    A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M firms (M not ≫ 1) and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.

  19. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  20. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  1. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  2. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at a set of points irregularly distributed over a plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.

  3. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  4. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
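
    The half-interval (bisection) search discussed above is short enough to state directly; the sketch below is an illustrative Python version, not the article's original program listing.

        def half_interval_root(f, a, b, tol=1e-10, max_iter=200):
            # Half-interval (bisection) search for a root of f on [a, b]; f(a) and f(b)
            # must have opposite signs so that a root is bracketed.
            fa, fb = f(a), f(b)
            if fa * fb > 0:
                raise ValueError("f(a) and f(b) must have opposite signs")
            for _ in range(max_iter):
                m = 0.5 * (a + b)
                fm = f(m)
                if fm == 0.0 or (b - a) / 2.0 < tol:
                    return m
                if fa * fm < 0:
                    b, fb = m, fm
                else:
                    a, fa = m, fm
            return 0.5 * (a + b)

    For example, half_interval_root(lambda x: x**3 - 2, 0.0, 2.0) converges to the cube root of 2.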

  5. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technologies and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication and matrix inversion or determinant computation. These are difficult to program and especially hard to realize on hardware. At the same time, the computation costs of the algorithms increase significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. Then these orthogonal vectors are used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement on hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is also compared with that of the other two algorithms. Finally, experimental results on a synthetic image and a real image are provided, giving further evidence of the effectiveness of the method. PMID:26964231
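
    The sketch below follows one reading of the procedure described above: for each endmember, build a vector orthogonal to all the other endmembers using only repeated vector operations, then take the ratio of projections as the unconstrained abundance. Names and numerical details are assumptions, not the authors' code.

        import numpy as np

        def ovp_abundances(pixel, endmembers):
            # endmembers: (bands x p) array of endmember spectra; pixel: (bands,) signature.
            # For each endmember, orthogonalize the remaining endmembers among themselves
            # (modified Gram-Schmidt), strip those directions from the endmember to get its
            # final orthogonal vector, and take the ratio of projections as the abundance.
            M = np.asarray(endmembers, dtype=float)
            p = M.shape[1]
            abundances = np.zeros(p)
            for i in range(p):
                basis = []
                for j in range(p):
                    if j == i:
                        continue
                    u = M[:, j].copy()
                    for b in basis:
                        u -= (u @ b) * b
                    norm = np.linalg.norm(u)
                    if norm > 1e-12:
                        basis.append(u / norm)
                v = M[:, i].copy()
                for b in basis:
                    v -= (v @ b) * b
                abundances[i] = (v @ pixel) / (v @ M[:, i])
            return abundances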

  6. Rare Event Detection Algorithm Of Water Quality

    NASA Astrophysics Data System (ADS)

    Ungs, M. J.

    2011-12-01

    A novel method is presented describing the development and implementation of an on-line water quality event detection algorithm. An algorithm was developed to distinguish between normal variation in water quality parameters and changes in these parameters triggered by the presence of contaminant spikes. Emphasis is placed on simultaneously limiting the number of false alarms (which are called false positives) that occur and the number of misses (called false negatives). The problem of excessive false alarms is common to existing change detection algorithms. EPA's standard measure of evaluation for event detection algorithms is to have a false alarm rate of less than 0.5 percent and a false positive rate less than 2 percent (EPA 817-R-07-002). A detailed description of the algorithm's development is presented. The algorithm is tested using historical water quality data collected by a public water supply agency at multiple locations and using spiking contaminants developed by the USEPA, Water Security Division. The water quality parameters of specific conductivity, chlorine residual, total organic carbon, pH, and oxidation reduction potential are considered. Abnormal data sets are generated by superimposing water quality changes on the historical or baseline data. Eddies-ET has defined reaction expressions which specify how the peak or spike concentration of a particular contaminant affects each water quality parameter. Nine default contaminants (Eddies-ET) were previously derived from pipe-loop tests performed at EPA's National Homeland Security Research Center (NHSRC) Test and Evaluation (T&E) Facility. A contaminant strength value of approximately 1.5 is considered to be a significant threat. The proposed algorithm has been able to achieve a combined false alarm rate of less than 0.03 percent for both false positives and for false negatives using contaminant spikes of strength 2 or more.

  7. [Algorithm for treating preoperative anemia].

    PubMed

    Bisbe Vives, E; Basora Macaya, M

    2015-06-01

    Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduces the rate of transfusions and improves hemoglobin levels at discharge and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or if we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameter and glomerular filtration rate, we can decide whether to start the treatment with intravenous iron alone or erythropoietin with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin objective will depend on the type of surgery and the patient's characteristics. PMID:26320341

  8. Novel MRC algorithms using GPGPU

    NASA Astrophysics Data System (ADS)

    Kato, Kokoro; Taniguchi, Yoshiyuki; Inoue, Tadao; Kadota, Kazuya

    2012-06-01

    GPGPU (General Purpose Graphic Processor Unit) has been attracting many engineers and scientists who develop their own software for massive numerical computation. With hundreds of core-processors and tens of thousands of threads operating concurrently, GPGPU programs can run significantly faster if their software architecture is well optimized. The basic program model used in GPGPU is SIMD (Single Instruction Multiple Data stream), and one must adapt one's programming model to SIMD. However, conditional branching is fundamentally not allowed in SIMD, and this limitation makes it quite challenging to apply GPGPU to photomask-related software such as MDP or MRC. In this paper unique methods are proposed to utilize the GPU for MRC operation. We explain novel algorithms for mask layout verification on GPGPU.

  9. Improved Global Ocean Color Using Polymer Algorithm

    NASA Astrophysics Data System (ADS)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications like fishing or oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  10. Parallelism of the SANDstorm hash algorithm.

    SciTech Connect

    Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree

    2009-09-01

    Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and they are not able to take advantage of the current trend of being run on multi-core platforms. Being limited in speed limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.

  11. An Algorithm Combining for Objective Prediction with Subjective Forecast Information

    NASA Astrophysics Data System (ADS)

    Choi, JunTae; Kim, SooHyun

    2016-04-01

    As direct or post-processed output from numerical weather prediction (NWP) models has begun to show acceptable performance compared with the predictions of human forecasters, many national weather centers have become interested in automatic forecasting systems based on NWP products alone, without intervention from human forecasters. The Korea Meteorological Administration (KMA) is now developing an automatic forecasting system for dry variables. The forecasts are automatically generated from NWP predictions using a post processing model (MOS). However, MOS cannot always produce acceptable predictions, and sometimes its predictions are rejected by human forecasters. In such cases, a human forecaster should manually modify the prediction consistently at points surrounding their corrections, using some kind of smart tool to incorporate the forecaster's opinion. This study introduces an algorithm to revise MOS predictions by adding a forecaster's subjective forecast information at neighbouring points. A statistical relation between two forecast points - a neighbouring point and a dependent point - was derived for the difference between a MOS prediction and that of a human forecaster. If the MOS prediction at a neighbouring point is updated by a human forecaster, the value at a dependent point is modified using a statistical relationship based on linear regression, with parameters obtained from a one-year dataset of MOS predictions and official forecast data issued by KMA. The best sets of neighbouring points and dependent point are statistically selected. According to verification, the RMSE of temperature predictions produced by the new algorithm was slightly lower than that of the original MOS predictions, and close to the RMSE of subjective forecasts. For wind speed and relative humidity, the new algorithm outperformed human forecasters.
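
    A minimal sketch of the statistical step described above, assuming an ordinary least-squares fit between forecaster-minus-MOS differences at a neighbouring point and at a dependent point; the variable names and the exact regression form are assumptions.

        import numpy as np

        def fit_adjustment(neigh_diff_history, dep_diff_history):
            # Fit a linear relation between forecaster-minus-MOS differences observed at a
            # neighbouring point and at a dependent point over a training period.
            slope, intercept = np.polyfit(neigh_diff_history, dep_diff_history, 1)
            return slope, intercept

        def revise_dependent(mos_dep, mos_neigh, human_neigh, slope, intercept):
            # Propagate a forecaster's manual edit at the neighbouring point to the
            # dependent point using the fitted relation.
            return mos_dep + slope * (human_neigh - mos_neigh) + intercept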

  12. LBR: Load Balancing Routing Algorithm for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Daabaj, Khaled; Dixon, Mike; Koziniec, Terry

    2010-06-01

    Homogeneous wireless sensor networks (WSNs) are organized using identical sensor nodes, but the nature of WSNs operations results in an imbalanced workload on gateway sensor nodes which may lead to a hot-spot or routing hole problem. The routing hole problem can be considered as a natural result of the tree-based routing schemes that are widely used in WSNs, where all nodes construct a multi-hop routing tree to a centralized root, e.g., a gateway or base station. For example, sensor nodes on the routing path and closer to the base station deplete their own energy faster than other nodes, or sensor nodes with the best link state to the base station are overloaded with traffic from the rest of the network and experience a faster energy depletion rate than their peers. Routing protocols for WSNs are reliability-oriented and their use of reliability metric to avoid unreliable links makes the routing scheme worse. However, none of these reliability oriented routing protocols explicitly uses load balancing in their routing schemes. Since improving network lifetime is a fundamental challenge of WSNs, we present, in this chapter, a novel, energy-wise, load balancing routing (LBR) algorithm that addresses load balancing in an energy efficient manner by maintaining a reliable set of parent nodes. This allows sensor nodes to quickly find a new parent upon parent loss due to the existing of node failure or energy hole. The proposed routing algorithm is tested using simulations and the results demonstrate that it outperforms the MultiHopLQI reliability based routing algorithm.

  13. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms to train feed forward neural networks. However, the convergence of this algorithm is slow, mainly because of the gradient descent algorithm. Previous research demonstrated that in the 'feed forward' algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function. The gain values change adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed. Multi layer feed forward neural networks have been assessed. A physical interpretation of the relationship between the gain value and the learning rate and weight values is given. The efficiency of the proposed algorithm is compared with the conventional Gradient Descent Method and verified by means of simulation on four classification problems. In learning the patterns, the simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer data set, with an improvement ratio of nearly 2.8, 1.76 on the diabetes problem, 65% better on the thyroid data sets, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
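
    To make the role of the gain parameter concrete, the sketch below shows a logistic activation with an explicit gain term and its derivative with respect to the net input; the rule for adapting the gain per node is not reproduced here.

        import numpy as np

        def sigmoid(x, gain=1.0):
            # Logistic activation with an explicit gain (slope) parameter.
            return 1.0 / (1.0 + np.exp(-gain * x))

        def sigmoid_grad(x, gain=1.0):
            # Derivative with respect to the net input x; the gain scales the slope,
            # which is the quantity adapted per node in the proposed method.
            s = sigmoid(x, gain)
            return gain * s * (1.0 - s)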

  14. Automatic-control-algorithm effects on energy production

    SciTech Connect

    McNerney, G.M.

    1981-01-01

    Algorithm control strategy for unattended wind turbine operation is a potentially important aspect of wind energy production that has thus far escaped treatment in the literature. Early experience in automatic operation of the Sandia 17-m VAWT has demonstrated the need for a systematic study of control algorithms. To this end, a computer model has been developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model has been used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long-term energy production. An attempt has been made to generalize these results from local site and turbine characteristics to obtain general guidelines for control algorithm design.

  15. A set-membership approach to blind channel equalization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yue-ming

    2013-03-01

    The constant modulus algorithm (CMA) has low computational complexity but exhibits slow convergence and possible convergence to local minima; the affine projection version of the CMA (AP-CMA) alleviates the speed limitations of the CMA. However, computational complexity has been a weak point in the implementation of AP-CMA. To reduce the computational complexity of the adaptive filtering algorithm, a new AP-CMA algorithm based on set membership (SM-AP-CMA) is proposed. The new algorithm combines a bounded error specification on the adaptive filter with the concept of data reuse. Simulations confirmed that the convergence rate of the proposed algorithm is significantly faster, while the excess mean square error is maintained at a relatively low level and the number of updates is substantially reduced when compared with its conventional counterpart.
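
    A rough sketch of a single constant-modulus tap update with a set-membership style gate, to make the data-reuse idea concrete; the error bound, step size and gating rule below are placeholder assumptions, and the actual SM-AP-CMA recursion in the paper is more involved.

        import numpy as np

        def sm_cma_step(w, x, mu=1e-3, R2=1.0, gamma=0.5):
            # One constant-modulus tap update with a set-membership style gate:
            # skip the update when the modulus error is already inside the bound gamma.
            y = np.vdot(w, x)                    # equalizer output, y = w^H x
            err = np.abs(y) ** 2 - R2            # deviation from the constant modulus
            if abs(err) <= gamma:
                return w                         # inside the error bound: reuse old taps
            return w - mu * err * x * np.conj(y)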

  16. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theory behind four existing swarm intelligence-based image segmentation algorithms: the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences in segmentation accuracy, time consumption, convergence and robustness to salt-and-pepper noise and Gaussian noise among these four algorithms. Through these comparisons, this paper gives qualitative analyses of the performance variance of the four algorithms. The conclusions in this paper should provide significant guidance for practical image segmentation.

  17. Structure-based algorithms for microvessel classification

    PubMed Central

    Smith, Amy F.; Secomb, Timothy W.; Pries, Axel R.; Smith, Nicolas P.; Shipley, Rebecca J.

    2014-01-01

    Objective Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries and venules. PMID:25403335

  18. Connected-Health Algorithm: Development and Evaluation.

    PubMed

    Vlahu-Gjorgievska, Elena; Koceski, Saso; Kulev, Igor; Trajkovik, Vladimir

    2016-04-01

    Nowadays, there is a growing interest in the adoption of novel ICT technologies in the field of medical monitoring and personal health care systems. This paper proposes the design of a connected-health algorithm inspired by the social computing paradigm. The purpose of the algorithm is to give a recommendation for performing a specific activity that will improve the user's health, based on his health condition and a set of knowledge derived from the history of the user and of users with similar attitudes. The algorithm could help users to have greater confidence in choosing the physical activities that will improve their health. The proposed algorithm has been experimentally validated using real data collected from a community of 1000 active users. The results showed that the recommended physical activity, which contributed towards a weight loss of at least 0.5 kg, is found in the first half of the ordered list of recommendations generated by the algorithm with probability > 0.6 at the 1% level of significance. PMID:26922593

  19. Combinatorial Multiobjective Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Martin, Eric T.

    2002-01-01

    The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm, then the N-branch GA was compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete / continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50 seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.

  20. SIGNIFICANT NATURAL HERITAGE AREAS (NC)

    EPA Science Inventory

    The North Carolina Department of Environment and Natural Resources, Division of Parks and Recreation, Natural Heritage Program in cooperation with the NC Center for Geographic Information & Analysis, developed the Significant Natural Heritage Areas digital data to determine the a...