Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J; Adatia, Ian
2016-01-01
We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% and 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral. PMID:27609672
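The pipeline this abstract describes (frame-wise MFCC features scored by one Gaussian mixture model per class) can be sketched in a few lines. A minimal illustration, assuming librosa for feature extraction and scikit-learn's GaussianMixture rather than the authors' own implementation; the file lists and component counts are hypothetical:

```python
# Minimal sketch of MFCC + per-class GMM classification (not the authors' code).
# ph_files / normal_files are hypothetical lists of heart-sound WAV paths.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    """Load one recording and return its frame-wise MFCC matrix (frames x coeffs)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def fit_class_gmm(files, n_components=8):
    """Pool MFCC frames from all training recordings of one class and fit a GMM."""
    frames = np.vstack([mfcc_features(f) for f in files])
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(frames)

def classify(path, gmm_ph, gmm_normal):
    """Label a recording by the class whose GMM gives the higher mean log-likelihood."""
    feats = mfcc_features(path)
    return "PH" if gmm_ph.score(feats) > gmm_normal.score(feats) else "normal"
```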
Evaluation of dynamic programming among the existing stereo matching algorithms
NASA Astrophysics Data System (ADS)
Huat, Teo Chee; Manap, Nurulfajar bin Abd
2015-05-01
There are various existing stereo matching algorithms in image processing that are applied to stereo vision images to obtain better disparity depth maps. One of them is the dynamic programming method. This research evaluates the performance of dynamic programming in comparison with other existing methods. The dynamic programming algorithm used here is based on global optimization, which processes stereo images with better accuracy and computational efficiency than other existing stereo matching algorithms. Unlike older scanline-based dynamic programming methods, the current method couples the disparity estimate at a particular pixel with those at all other pixels. Details of each existing method are presented in this paper, along with a comparison between dynamic programming and the existing methods. These results support the use of the dynamic programming method in many image processing applications.
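For a concrete reference point, the classical per-scanline dynamic-programming formulation (the older approach the abstract contrasts with) fits in a short function. A simplified sketch with an absolute-difference matching cost and a linear smoothness penalty; the penalty weight and disparity range are illustrative assumptions:

```python
import numpy as np

def dp_scanline_disparity(left_row, right_row, max_disp=16, smooth=0.1):
    """Classical scanline DP stereo: choose one disparity per pixel, minimizing
    matching cost plus a penalty on disparity jumps between neighboring pixels."""
    n, d_range = len(left_row), max_disp + 1
    cost = np.full((n, d_range), np.inf)
    for x in range(n):
        for d in range(min(d_range, x + 1)):      # matching right pixel x-d must exist
            cost[x, d] = abs(float(left_row[x]) - float(right_row[x - d]))
    acc = cost.copy()                             # acc[x, d]: best path cost ending at (x, d)
    back = np.zeros((n, d_range), dtype=int)
    for x in range(1, n):
        for d in range(d_range):
            prev = acc[x - 1] + smooth * np.abs(np.arange(d_range) - d)
            back[x, d] = int(np.argmin(prev))
            acc[x, d] = cost[x, d] + prev[back[x, d]]
    disp = np.zeros(n, dtype=int)                 # backtrack the cheapest full path
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp

left = np.array([10, 10, 80, 80, 10, 10, 10], float)
right = np.array([10, 80, 80, 10, 10, 10, 10], float)  # scene shifted by one pixel
print(dp_scanline_disparity(left, right, max_disp=3))
```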
2014-01-01
Background Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on a different rationale, it possessed its own merits and claimed to outperform the others. However, these claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms. And based on the proposed performance indices, we conducted a comprehensive performance evaluation. Results We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm for predicting cooperative TF pairs may be strong by one measure but weak by another. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TFs identification algorithms. Most importantly, these proposed indices can be easily applied to
Maliwichi, Madalitso; Rosenberg, Nora E.; Macfie, Rebekah; Olson, Dan; Hoffman, Irving; van der Horst, Charles M.; Kazembe, Peter N.; Hosseinipour, Mina C.; McCollum, Eric D.
2014-01-01
Objective To determine, for the WHO algorithm for point-of-care diagnosis of HIV infection, the agreement levels between pediatricians and non-physician clinicians, and to compare sensitivity and specificity profiles of the WHO algorithm and different CD4 thresholds against HIV PCR testing in hospitalized Malawian infants. Methods In 2011, hospitalized HIV-exposed infants <12 months in Lilongwe, Malawi were evaluated independently with the WHO algorithm by both a pediatrician and clinical officer. Blood was collected for CD4 and molecular HIV testing (DNA or RNA PCR). Using molecular testing as the reference, sensitivity, specificity, and positive predictive value (PPV) were determined for the WHO algorithm and CD4 count thresholds of 1500 and 2000 cells/mm3 by pediatricians and clinical officers. Results We enrolled 166 infants (50% female, 34% <2 months, 37% HIV-infected). Sensitivity was higher using CD4 thresholds (<1500, 80%; <2000, 95%) than with the algorithm (physicians, 57%; clinical officers, 71%). Specificity was comparable for CD4 thresholds (<1500, 68%, <2000, 50%) and the algorithm (pediatricians, 55%, clinical officers, 50%). The positive predictive values were slightly better using CD4 thresholds (<1500, 59%, <2000, 52%) than the algorithm (pediatricians, 43%, clinical officers 45%) at this prevalence. Conclusion Performance by the WHO algorithm and CD4 thresholds resulted in many misclassifications. Point-of-care CD4 thresholds of <1500 cells/mm3 or <2000 cells/mm3 could identify more HIV-infected infants with fewer false positives than the algorithm. However, a point-of-care option with better performance characteristics is needed for accurate, timely HIV diagnosis. PMID:24754543
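The sensitivity, specificity, and PPV figures quoted above all derive from a 2x2 table against the PCR reference standard; a minimal helper for reproducing such numbers (the counts shown are hypothetical, not data from this study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and PPV from a 2x2 table versus a reference test."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of truly infected flagged positive
        "specificity": tn / (tn + fp),  # fraction of uninfected correctly negative
        "ppv": tp / (tp + fp),          # probability a positive call is truly infected
    }

print(diagnostic_metrics(tp=49, fp=47, fn=12, tn=58))  # illustrative counts only
```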
2014-01-01
Background Drug discovery, disease detection, and personalized medicine are fast-growing areas of genomic research. With the advancement of next-generation sequencing techniques, researchers can obtain an abundance of data for many different biological assays in a short period of time. When this data is error-free, the result is a high-quality base-pair resolution picture of the genome. However, when the data is lossy, the heuristic algorithms currently used to align next-generation sequences cause the corresponding accuracy to drop. Results This paper describes a program, ADaM (APF DNA Mapper), which significantly increases final alignment accuracy. ADaM works by first using an existing program to align "easy" sequences, and then using an algorithm with accuracy guarantees (the APF) to align the remaining sequences. The final result is a technique that increases the mapping accuracy from only 60% to over 90% for harder-to-align sequences. PMID:25079667
Bush, Keith; Cisler, Josh
2013-01-01
Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variance in fluctuations of the BOLD signal is not due solely to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system’s state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification, and observation sampling rate (i.e., TR). Further, we compare the algorithms’ performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms’ performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed. PMID:23602664
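The forward model shared by the compared algorithms treats the BOLD series as a convolution of a neural event series with an HRF plus noise. A minimal ridge-regularized deconvolution under an assumed gamma-shaped HRF, shown only to make that generative model concrete (the paper's method is a probabilistic state-space approach, not this):

```python
import numpy as np
from scipy.stats import gamma

def gamma_hrf(tr=2.0, duration=30.0):
    """A simple gamma-shaped HRF sampled at the scan repetition time (TR)."""
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, a=6)            # peak near 5 s, a common canonical choice
    return h / h.sum()

def deconvolve_ridge(bold, hrf, lam=1.0):
    """Estimate the neural event series x from y ~= H x by ridge regression,
    where H is the Toeplitz (causal convolution) matrix of the HRF."""
    n = len(bold)
    H = np.zeros((n, n))
    for i, hv in enumerate(hrf):
        H += hv * np.eye(n, k=-i)    # each sub-diagonal carries one HRF sample
    A = H.T @ H + lam * np.eye(n)
    return np.linalg.solve(A, H.T @ bold)

# Toy demo: recover a sparse event train from its noisy convolution.
rng = np.random.default_rng(0)
x_true = np.zeros(120); x_true[[20, 55, 90]] = 1.0
h = gamma_hrf()
y = np.convolve(x_true, h)[:120] + 0.02 * rng.standard_normal(120)
x_hat = deconvolve_ridge(y, h, lam=0.5)
```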
NASA Astrophysics Data System (ADS)
Martin, Jacob A.; Gross, Kevin C.
2016-05-01
As off-nadir viewing platforms become increasingly prevalent in remote sensing, material identification techniques must be robust to changing viewing geometries. Current identification strategies generally rely on estimating reflectivity or emissivity, both of which vary with viewing angle. Presented here is a technique, leveraging polarimetric and hyperspectral imaging (P-HSI), to estimate index of refraction which is invariant to viewing geometry. Results from a quartz window show that index of refraction can be retrieved to within 0.08 rms error from 875-1250 cm-1 for an amorphous material. Results from a silicon carbide (SiC) wafer, which has much sharper features than quartz glass, show the index of refraction can be retrieved to within 0.07 rms error. The results from each of these datasets show an improvement when compared with a maximum smoothness TES algorithm.
NASA Astrophysics Data System (ADS)
Herrera, Kathleen Kate
In recent years, laser-induced breakdown spectroscopy (LIBS) has become an increasingly popular technique for many diverse applications. This is mainly due to its numerous attractive features, including minimal to no sample preparation, minimal sample invasiveness, sample versatility, remote detection capability, and simultaneous multi-elemental capability. However, most LIBS applications are limited to semi-quantitative or relative analysis due to the difficulty in finding matrix-matched standards or a constant reference component in the system for calibration purposes. Therefore, methods that do not require the use of reference standards (hence, standard-free methods) are highly desired. In this research, a general LIBS system was constructed, calibrated and optimized. The corresponding instrumental function and relative spectral efficiency of the detection system were also investigated. In addition, development of a spectral acquisition method was necessary so that data in the wide spectral range from 220 to 700 nm could be obtained using a non-echelle detection system. This requires multiple acquisitions of successive spectral windows and splicing the windows together with optimum overlap using an in-house program written in Q-basic. Two existing standard-free approaches, the calibration-free LIBS (CF-LIBS) technique and the Monte Carlo simulated annealing optimization modeling algorithm for LIBS (MC-LIBS), were experimentally evaluated in this research. The CF-LIBS approach, which is based on the Boltzmann plot method, is used to directly evaluate the plasma temperature, electron number density and relative concentrations of species present in a given sample without the need for reference standards. In the second approach, the initial value problem is solved based on the model of a radiative plasma expanding into vacuum. Here, the prediction of the initial plasma conditions (i.e., temperature and elemental number densities) is achieved by a step-wise Monte Carlo
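The Boltzmann-plot step at the core of CF-LIBS fits ln(Iλ/(gA)) against upper-level energy; for a plasma in local thermodynamic equilibrium the slope is -1/(k_B·T). A minimal sketch; the line data below are illustrative placeholders, not measurements:

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, a_ul, e_upper_ev):
    """CF-LIBS-style Boltzmann plot: ln(I*lambda/(g*A)) vs E_upper is linear
    with slope -1/(k_B T) under local thermodynamic equilibrium."""
    y = np.log(intensity * wavelength_nm / (g_upper * a_ul))
    slope, _ = np.polyfit(e_upper_ev, y, 1)
    return -1.0 / (K_B_EV * slope)   # temperature in kelvin

# Hypothetical emission lines of one species (illustrative values only):
I = np.array([1200.0, 800.0, 300.0])     # line intensities
lam = np.array([510.5, 521.8, 465.1])    # wavelengths, nm
g = np.array([4, 6, 8])                  # upper-level degeneracies
A = np.array([2.0e7, 7.5e7, 9.0e7])      # transition probabilities, 1/s
E = np.array([3.82, 6.19, 7.70])         # upper-level energies, eV
print(boltzmann_plot_temperature(I, lam, g, A, E))
```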
Modified OMP Algorithm for Exponentially Decaying Signals
Kazimierczuk, Krzysztof; Kasprzak, Paweł
2015-01-01
A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
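Matching one Lorentzian per iteration can be sketched as a greedy pursuit over a grid of candidate centers and widths, with a least-squares amplitude refit each round, exactly in the OMP spirit. A toy frequency-domain version under those assumptions (not the authors' LPMP code):

```python
import numpy as np

def lorentzian(x, center, width):
    """Unit-norm Lorentzian atom centered at `center` with half-width `width`."""
    g = width / ((x - center) ** 2 + width ** 2)
    return g / np.linalg.norm(g)

def lorentzian_matching_pursuit(signal, x, centers, widths, n_peaks=3):
    """Greedy OMP-style pursuit: pick the Lorentzian atom most correlated with
    the residual, then least-squares refit all selected amplitudes."""
    atoms, picked = [], []
    residual = np.asarray(signal, dtype=float).copy()
    amps = np.array([])
    for _ in range(n_peaks):
        best = max(((c, w) for c in centers for w in widths),
                   key=lambda cw: abs(float(np.dot(lorentzian(x, *cw), residual))))
        picked.append(best)
        atoms.append(lorentzian(x, *best))
        D = np.column_stack(atoms)
        amps, *_ = np.linalg.lstsq(D, signal, rcond=None)
        residual = signal - D @ amps
    return picked, amps

# Toy demo: two Lorentzian peaks plus noise.
x = np.linspace(0, 10, 500)
truth = 3 * lorentzian(x, 3.0, 0.2) + 1.5 * lorentzian(x, 7.0, 0.4)
rng = np.random.default_rng(0)
signal = truth + 0.01 * rng.standard_normal(x.size)
print(lorentzian_matching_pursuit(signal, x, np.arange(0.5, 10, 0.25), [0.1, 0.2, 0.4], 2))
```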
Extortion can outperform generosity in the iterated prisoner's dilemma.
Wang, Zhijian; Zhou, Yanran; Lien, Jaimie W; Zheng, Jie; Xu, Bin
2016-01-01
Zero-determinant (ZD) strategies, as discovered by Press and Dyson, can enforce a linear relationship between a pair of players' scores in the iterated prisoner's dilemma. Particularly, the extortionate ZD strategies can enforce and exploit cooperation, providing a player with a score advantage, and consequently higher scores than those from either mutual cooperation or generous ZD strategies. In laboratory experiments in which human subjects were paired with computer co-players, we demonstrate that both the generous and the extortionate ZD strategies indeed enforce a unilateral control of the reward. When the experimental setting is sufficiently long and the computerized nature of the opponent is known to human subjects, the extortionate strategy outperforms the generous strategy. Human subjects' cooperation rates when playing against extortionate and generous ZD strategies are similar after learning has occurred. More than half of extortionate strategists finally obtain an average score higher than that from mutual cooperation. PMID:27067513
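Press and Dyson give closed-form memory-one cooperation probabilities for the extortionate strategies. A simulation sketch with the conventional payoffs (T, R, P, S) = (5, 3, 1, 0), where χ = 3 and φ = 1/26 reproduce their example strategy (11/13, 1/2, 7/26, 0); the round count and computer opponent here are arbitrary choices:

```python
import random

T, R, P, S = 5, 3, 1, 0   # conventional prisoner's dilemma payoffs

def extortionate_zd(chi, phi):
    """Press-Dyson extortionate strategy enforcing s_X - P = chi * (s_Y - P).
    Returns cooperation probabilities after outcomes (CC, CD, DC, DD)."""
    return (1 - phi * (chi - 1) * (R - P),
            1 - phi * (chi * (T - P) + (P - S)),
            phi * ((T - P) + chi * (P - S)),
            0.0)

def play(p_x, p_y, rounds=100000, seed=1):
    """Iterated PD between two memory-one strategies; returns average payoffs."""
    rng = random.Random(seed)
    idx = {"CC": 0, "CD": 1, "DC": 2, "DD": 3}
    pay = {"CC": (R, R), "CD": (S, T), "DC": (T, S), "DD": (P, P)}
    state, sx, sy = "CC", 0, 0
    for _ in range(rounds):
        x = rng.random() < p_x[idx[state]]
        # Y sees the mirrored history: swap CD and DC when indexing its strategy.
        y = rng.random() < p_y[idx[state[::-1]]]
        state = ("C" if x else "D") + ("C" if y else "D")
        sx += pay[state][0]; sy += pay[state][1]
    return sx / rounds, sy / rounds

extort = extortionate_zd(chi=3, phi=1 / 26)     # = (11/13, 1/2, 7/26, 0)
print(play(extort, (1, 1, 1, 1)))               # versus an unconditional cooperator
```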
Dodecylresorufin (C12R) Outperforms Resorufin in Microdroplet Bacterial Assays.
Scheler, Ott; Kaminski, Tomasz S; Ruszczak, Artur; Garstecki, Piotr
2016-05-11
This paper proves that dodecylresorufin (C12R) outperforms resorufin (the conventional form of this dye) in droplet microfluidic bacterial assays. Resorufin is a marker dye that is widely used in different fields of microbiology and has increasingly been applied in droplet microfluidic assays and experiments. The main concern associated with resorufin in droplet-based systems is dye leakage into the oil phase and neighboring droplets. The leakage decreases the performance of assays because it causes averaging of the signal between the positive (bacteria-containing) and negative (empty) droplets. Here we show that C12R is a promising alternative to conventional resorufin because it maintains higher sensitivity, specificity, and signal-to-noise ratio over time. These characteristics make C12R a suitable reagent for droplet digital assays and for monitoring of microbial growth in droplets. PMID:27100211
Better than Nature: Nicotinamide Biomimetics That Outperform Natural Coenzymes.
Knaus, Tanja; Paul, Caroline E; Levy, Colin W; de Vries, Simon; Mutti, Francesco G; Hollmann, Frank; Scrutton, Nigel S
2016-01-27
The search for affordable, green biocatalytic processes is a challenge for chemicals manufacture. Redox biotransformations are potentially attractive, but they rely on unstable and expensive nicotinamide coenzymes that have prevented their widespread exploitation. Stoichiometric use of natural coenzymes is not viable economically, and the instability of these molecules hinders catalytic processes that employ coenzyme recycling. Here, we investigate the efficiency of man-made synthetic biomimetics of the natural coenzymes NAD(P)H in redox biocatalysis. Extensive studies with a range of oxidoreductases belonging to the "ene" reductase family show that these biomimetics are excellent analogues of the natural coenzymes, revealed also in crystal structures of the ene reductase XenA with selected biomimetics. In selected cases, these biomimetics outperform the natural coenzymes. "Better-than-Nature" biomimetics should find widespread application in fine and specialty chemicals production by harnessing the power of high stereo-, regio-, and chemoselective redox biocatalysts and enabling reactions under mild conditions at low cost. PMID:26727612
Lewinski, Peter
2015-01-01
Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings. PMID:26441761
Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms
Hu, Zhongyi; Xiong, Tao
2013-01-01
Electricity load forecasting is an important issue that is widely explored and examined in power systems operation literature and commercial transactions in electricity markets literature as well. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA algorithm is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425
Schall, Marina; Martiny, Sarah E; Goetz, Thomas; Hall, Nathan C
2016-05-01
Although expressing positive emotions is typically socially rewarded, in the present work, we predicted that people suppress positive emotions and thereby experience social benefits when outperformed others are present. We tested our predictions in three experimental studies with high school students. In Studies 1 and 2, we manipulated the type of social situation (outperformance vs. non-outperformance) and assessed suppression of positive emotions. In both studies, individuals reported suppressing positive emotions more in outperformance situations than in non-outperformance situations. In Study 3, we manipulated the social situation (outperformance vs. non-outperformance) as well as the videotaped person's expression of positive emotions (suppression vs. expression). The findings showed that when outperforming others, individuals were indeed evaluated more positively when they suppressed rather than expressed their positive emotions, and demonstrate the importance of the specific social situation with respect to the effects of suppression. PMID:27029576
Adult vultures outperform juveniles in challenging thermal soaring conditions
Harel, Roi; Horvitz, Nir; Nathan, Ran
2016-01-01
Due to the potentially detrimental consequences of low performance in basic functional tasks, individuals are expected to improve performance with age and show the most marked changes during early stages of life. Soaring-gliding birds use rising-air columns (thermals) to reduce energy expenditure allocated to flight. We offer a framework to evaluate thermal soaring performance, and use GPS-tracking to study movements of Eurasian griffon vultures (Gyps fulvus). Because the location and intensity of thermals are variable, we hypothesized that soaring performance would improve with experience and predicted that the performance of inexperienced individuals (<2 months) would be inferior to that of experienced ones (>5 years). No differences were found in body characteristics, climb rates under low wind shear, and thermal selection, presumably due to vultures’ tendency to forage in mixed-age groups. Adults, however, outperformed juveniles in their ability to adjust fine-scale movements under challenging conditions, as juveniles had lower climb rates under intermediate wind shear, particularly on the lee-side of thermal columns. Juveniles were also less efficient along the route both in terms of time and energy. The consequences of these handicaps are probably exacerbated if juveniles lag behind adults in finding and approaching food. PMID:27291590
Digital image analysis outperforms manual biomarker assessment in breast cancer.
Stålhammar, Gustav; Fuentes Martinez, Nelson; Lippert, Michael; Tobin, Nicholas P; Mølholm, Ida; Kis, Lorand; Rosin, Gustaf; Rantalainen, Mattias; Pedersen, Lars; Bergh, Jonas; Grunkin, Michael; Hartman, Johan
2016-04-01
In the spectrum of breast cancers, categorization according to the four gene expression-based subtypes 'Luminal A,' 'Luminal B,' 'HER2-enriched,' and 'Basal-like' is the method of choice for prognostic and predictive value. As gene expression assays are not yet universally available, routine immunohistochemical stains act as surrogate markers for these subtypes. Thus, congruence of surrogate markers and gene expression tests is of utmost importance. In this study, 3 cohorts of primary breast cancer specimens (total n=436) with up to 28 years of survival data were scored for Ki67, ER, PR, and HER2 status manually and by digital image analysis (DIA). The results were then compared for sensitivity and specificity for the Luminal B subtype, concordance to PAM50 assays in subtype classification, and prognostic power. The DIA system used was the Visiopharm Integrator System. DIA outperformed manual scoring in terms of sensitivity and specificity for the Luminal B subtype, widely considered the most challenging distinction in surrogate subclassification, and produced slightly better concordance and Cohen's κ agreement with PAM50 gene expression assays. Manual biomarker scores and DIA essentially matched each other for Cox regression hazard ratios for all-cause mortality. When the Nottingham combined histologic grade (Elston-Ellis) was used as a prognostic surrogate, stronger Spearman's rank-order correlations were produced by DIA. The prognostic value of Ki67 scores in terms of likelihood ratio χ² (LR χ²) was higher for DIA, which also added significantly more prognostic information to the manual scores (LR-Δχ²). In conclusion, the system for DIA evaluated here was in most aspects a superior alternative to manual biomarker scoring. It also has the potential to reduce time consumption for pathologists, as many of the steps in the workflow are either automatic or feasible to manage without pathological expertise. PMID:26916072
Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun
2014-01-01
A hybrid optimization algorithm combining the finite state method (FSM) and the genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local searching ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that we can generate promising substructures or partial solutions by using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for conducting simulations. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
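Two of the ideas described, eliminating identical records before any pairwise work and linking similar records through connected components, can be shown compactly. A toy single-threaded sketch (hashing stands in for the radix sort, and plain Levenshtein distance for the paper's faster edit-distance computation):

```python
from itertools import combinations

def dedupe_exact(records):
    """Collapse byte-identical records first (the paper radix-sorts on selected
    attributes; an order-preserving hash map has the same effect in this sketch)."""
    return list(dict.fromkeys(records))

def edit_distance(a, b):
    """Plain dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def link(records, threshold=2):
    """Add an edge between similar records, then return the connected
    components of that graph as linked clusters (union-find)."""
    recs = dedupe_exact(records)
    parent = list(range(len(recs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]; i = parent[i]
        return i
    for i, j in combinations(range(len(recs)), 2):
        if edit_distance(recs[i], recs[j]) <= threshold:
            parent[find(i)] = find(j)
    clusters = {}
    for i, r in enumerate(recs):
        clusters.setdefault(find(i), []).append(r)
    return list(clusters.values())

print(link(["john smith", "jon smith", "jane doe", "john smith"]))
```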
A novel iris segmentation algorithm based on small eigenvalue analysis
NASA Astrophysics Data System (ADS)
Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc
2015-12-01
In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as its distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on the statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments were carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2). We compared our proposed method with existing iris segmentation methods. Our proposed method has the least time complexity of O(n(i+p)). The results of the experiments emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
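The boundary-localization step exploits the fact that the smaller eigenvalue of the 2x2 covariance matrix of edge-pixel coordinates approaches zero when those pixels are nearly collinear. A minimal sketch of that statistic with synthetic points:

```python
import numpy as np

def small_eigenvalue(edge_pixels):
    """Smaller eigenvalue of the covariance of a set of (x, y) edge pixels.
    Values near zero indicate the pixels lie close to a straight line or arc."""
    pts = np.asarray(edge_pixels, dtype=float)
    cov = np.cov(pts.T)
    return float(np.linalg.eigvalsh(cov)[0])   # eigvalsh returns ascending order

line_like = [(i, 2 * i + 0.01 * (-1) ** i) for i in range(20)]   # near-collinear
blob_like = [(i % 5, i // 5) for i in range(20)]                 # spread out
print(small_eigenvalue(line_like), small_eigenvalue(blob_like))
```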
Do new wipe materials outperform traditional lead dust cleaning methods?
Lewis, Roger D; Ong, Kee Hean; Emo, Brett; Kennedy, Jason; Brown, Christopher A; Condoor, Sridhar; Thummalakunta, Laxmi
2012-01-01
Government guidelines have traditionally recommended the use of wet mopping, sponging, or vacuuming for removal of lead-contaminated dust from hard surfaces in homes. The emergence of new technologies, such as electrostatic dry cloths and wet disposable cloths used on mopheads, for removal of dust provides an opportunity to evaluate their ability to remove lead compared with more established methods. The purpose of this study was to determine if relative differences exist between two new and two older methods for removal of lead-contaminated dust (LCD) from three wood surfaces that were characterized by different roughness or texture. Standard leaded dust, <75 μm, was deposited by gravity onto the wood specimens. Specimens were cleaned using an automated device. Electrostatic dry cloths (dry Swiffer), wet Swiffer cloths, paper shop towels with non-ionic detergent, and vacuuming were used for cleaning LCD from the specimens. Lead analysis was by anodic stripping voltammetry. After the cleaning study was conducted, a study of the coefficient of friction was performed for each wipe material. Analysis of variance was used to evaluate the surface and cleaning methods. There were significant interactions between cleaning method and surface types, p = 0.007. Cleaning method was found to be a significant factor in removal of lead, p < 0.001, indicating that the effectiveness of each cleaning method is different. However, cleaning was not affected by type of surface. The coefficient of friction, significantly different among the three wipes, is likely to influence the cleaning action. Cleaning method appears to be more important than texture in LCD removal from hard surfaces. There are some small but important factors in cleaning LCD from hard surfaces, including the limits of a Swiffer mop to conform to curved surfaces and the efficiency of the wetted shop towel and vacuuming for cleaning all surface textures. The mean percentage reduction in lead dust achieved by the
Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet
2013-01-01
Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730
Adaptive image contrast enhancement algorithm for point-based rendering
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Liu, Xiaoping P.
2015-03-01
Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.
Landry, Brian R.; Subotnik, Joseph E.
2015-03-14
We evaluate the accuracy of Tully’s surface hopping algorithm for the spin-boson model in the limit of small to moderate reorganization energy. We calculate transition rates between diabatic surfaces in the exciton basis and compare against exact results from the hierarchical equations of motion; we also compare against approximate rates from the secular Redfield equation and Ehrenfest dynamics. We show that decoherence-corrected surface hopping performs very well in this regime, agreeing with secular Redfield theory for very weak system-bath coupling and outperforming secular Redfield theory for moderate system-bath coupling. Surface hopping can also be extended beyond the Markovian limits of standard Redfield theory. Given previous work [B. R. Landry and J. E. Subotnik, J. Chem. Phys. 137, 22A513 (2012)] that establishes the accuracy of decoherence-corrected surface-hopping in the Marcus regime, this work suggests that surface hopping may well have a very wide range of applicability.
A Hybrid Evolutionary Algorithm for Wheat Blending Problem
Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi
2014-01-01
This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222
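The linear-relaxation seeding step can be illustrated with a generic blending LP (minimize cost subject to a quality floor and a fixed total). The lot costs, protein contents, and the use of scipy's linprog are all illustrative assumptions, not the paper's data or solver:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical wheat lots: cost per tonne and protein percentage.
cost = np.array([230.0, 255.0, 300.0])
protein = np.array([9.5, 11.0, 13.5])
min_protein, total = 11.5, 1.0        # blend 1 tonne with >= 11.5% protein

# Minimize cost @ x  s.t.  protein @ x >= min_protein * total, sum(x) = total, x >= 0.
res = linprog(c=cost,
              A_ub=[-protein], b_ub=[-min_protein * total],
              A_eq=[np.ones(3)], b_eq=[total],
              bounds=[(0, None)] * 3)
print(res.x, res.fun)   # relaxed blend fractions, usable to seed an initial population
```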
An improved Physarum polycephalum algorithm for the shortest path problem.
Zhang, Xiaoge; Wang, Qing; Adamatzky, Andrew; Chan, Felix T S; Mahadevan, Sankaran; Deng, Yong
2014-01-01
Shortest path is among the classical problems of computer science. These problems are solved by hundreds of algorithms, silicon computing architectures, and novel-substrate unconventional computing devices. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms have been designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and Dijkstra's algorithm. PMID:24982960
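The underlying Physarum model alternates between solving a Kirchhoff pressure problem and reinforcing the conductivity of edges that carry flux. A compact numpy sketch of this basic solver (without the paper's added energy parameter; the graph, step size, and the small conductivity floor are illustrative choices):

```python
import numpy as np

def physarum_shortest_path(n, edges, source, sink, iters=200, dt=0.5, floor=1e-9):
    """Basic Physarum solver on an undirected graph. edges: (i, j, length).
    Edges carrying flux thicken (dD/dt = |Q| - D); unused edges decay."""
    D = {e[:2]: 1.0 for e in edges}              # conductivities
    L = {e[:2]: e[2] for e in edges}             # edge lengths
    for _ in range(iters):
        A = np.zeros((n, n)); b = np.zeros(n)    # Kirchhoff equations A p = b
        for (i, j), d in D.items():
            w = d / L[i, j]
            A[i, i] += w; A[j, j] += w; A[i, j] -= w; A[j, i] -= w
        b[source] = 1.0                          # unit inflow at the source
        A[sink, :] = 0.0; A[sink, sink] = 1.0; b[sink] = 0.0   # ground the sink
        p = np.linalg.solve(A, b)
        for (i, j) in D:
            q = D[i, j] / L[i, j] * (p[i] - p[j])              # edge flux
            D[i, j] = max(D[i, j] + dt * (abs(q) - D[i, j]), floor)
    return D

edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 3.0)]
D = physarum_shortest_path(4, edges, source=0, sink=3)
print({e: round(v, 3) for e, v in D.items()})    # shortest-path edges stay near 1
```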
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2011-12-01
This paper proposes a swarm intelligence based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search for optimal solutions over the entire solution space. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization, and spending too much time on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrated that the proposed method outperformed the existing GA-based method in terms of CPU utilization.
ERIC Educational Resources Information Center
Zhao, Dacheng; Singh, Michael
2011-01-01
International comparative studies and cross-cultural studies of mathematics achievement indicate that Chinese students (whether living in or outside China) consistently outperform their Western counterparts. This study shows that the gap between Chinese-Australian and other Australian students is best explained by differences in motivation to…
LAHS: A novel harmony search algorithm based on learning automata
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin
2013-12-01
This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
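A bare-bones harmony search loop makes the roles of HMCR, PAR, and bw concrete. This sketch keeps them fixed, which is precisely what the LAHS replaces with learning-automata-based selection; the memory size and iteration budget are arbitrary:

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    """Minimize f over box bounds. Each new harmony is built per dimension:
    with prob. HMCR recall a memory value (then pitch-adjust with prob. PAR,
    step scale bw); otherwise draw a fresh random value."""
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                v = random.choice(memory)[d]
                if random.random() < par:
                    v += bw * (hi - lo) * random.uniform(-1, 1)
            else:
                v = random.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
print(harmony_search(sphere, [(-5, 5)] * 4))
```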
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
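One way to read "introducing chaos into FA" is to let a chaotic map drive a parameter that is constant in the standard algorithm. A sketch in which a logistic map modulates the attractiveness scale (an illustration of the idea, not any of the paper's 12 specific variants):

```python
import numpy as np

def chaotic_firefly(f, bounds, n=25, iters=200, gamma=1.0, alpha=0.2, seed=0):
    """Firefly algorithm with the attractiveness scale beta0 modulated by the
    logistic chaotic map x <- 4x(1-x) instead of a fixed constant."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n, len(lo)))
    fit = np.apply_along_axis(f, 1, pos)
    chaos = 0.7                                      # logistic-map state in (0, 1)
    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)
        beta0 = chaos                                # chaotic attractiveness scale
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                  # move firefly i toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) \
                              + alpha * rng.uniform(-0.5, 0.5, len(lo))
                    pos[i] = np.clip(pos[i], lo, hi)
                    fit[i] = f(pos[i])
    best = int(np.argmin(fit))
    return pos[best], fit[best]

print(chaotic_firefly(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 3))
```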
A novel bee swarm optimization algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush
2010-10-01
The optimization algorithms which are inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust: they produce excellent results and outperform the other algorithms investigated in this study.
Analyzing Enron Data: Bitmap Indexing Outperforms MySQL Queries by Several Orders of Magnitude
Stockinger, Kurt; Rotem, Doron; Shoshani, Arie; Wu, Kesheng
2006-01-28
FastBit is an efficient, compressed bitmap indexing technology that was developed in our group. In this report we evaluate the performance of MySQL and FastBit for analyzing the email traffic of the Enron dataset. The first finding shows that materializing the join results of several tables significantly improves the query performance. The second finding shows that FastBit outperforms MySQL by several orders of magnitude.
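The core of bitmap indexing is that equality predicates and their conjunctions become bitwise operations over per-value bit vectors instead of row scans. A toy uncompressed version (FastBit itself adds word-aligned compression and other optimizations; the example column data are made up):

```python
class BitmapIndex:
    """One bitmap (a Python int used as a bit vector) per distinct column value."""
    def __init__(self, column):
        self.n = len(column)
        self.bitmaps = {}
        for row, value in enumerate(column):
            self.bitmaps[value] = self.bitmaps.get(value, 0) | (1 << row)

    def equals(self, value):
        """Bitmap of rows where the column equals `value`."""
        return self.bitmaps.get(value, 0)

# WHERE sender = 'alice' AND topic = 'audit' becomes a single bitwise AND:
senders = ["alice", "bob", "alice", "carol", "alice"]
topics = ["audit", "audit", "hr", "audit", "audit"]
idx_s, idx_t = BitmapIndex(senders), BitmapIndex(topics)
hits = idx_s.equals("alice") & idx_t.equals("audit")
print([row for row in range(len(senders)) if hits >> row & 1])   # -> [0, 4]
```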
The ontogeny of human point following in dogs: When younger dogs outperform older.
Zaine, Isabela; Domeniconi, Camila; Wynne, Clive D L
2015-10-01
We investigated puppies' responsiveness to hand points differing in salience. Experiment 1 compared performance of younger (8 weeks old) and older (12 weeks) shelter pups in following pointing gestures. We hypothesized that older puppies would show better performance. Both groups followed the easy and moderate but not the difficult pointing cues. Surprisingly, the younger pups outperformed the older ones in following the moderate and difficult points. Investigation of subjects' backgrounds revealed that significantly more younger pups had experience living in human homes than did the older pups. Thus, we conducted a second experiment to isolate the variable experience. We collected additional data from older pet pups living in human homes on the same three point types and compared their performance with the shelter pups from Experiment 1. The pups living in homes accurately followed all three pointing cues. When comparing both experienced groups, the older pet pups outperformed the younger shelter ones, as predicted. When comparing the two same-age groups differing in background experience, the pups living in homes outperformed the shelter pups. A significant correlation between experience with humans and success in following less salient cues was found. The importance of ontogenetic learning in puppies' responsiveness to certain human social cues is discussed. PMID:26192336
Advanced GF(3^2) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.
Liu, Tao; Lin, Changyu; Djordjevic, Ivan B
2016-06-27
In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(3^2) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(3^2) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB. PMID:27410549
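The non-uniform signaling idea rests on a standard fact: with i.i.d. uniform input bits, a prefix code transmits symbol k with probability 2^-len_k. A small check with a hypothetical 9-codeword prefix code (not the paper's Huffman design):

```python
from math import log2

# Hypothetical 9-codeword prefix code (lengths chosen so the Kraft sum is 1);
# the paper designs its own Huffman code, this is only illustrative.
lengths = [3, 3, 3, 3, 3, 3, 3, 4, 4]
probs = [2.0 ** -L for L in lengths]

assert abs(sum(probs) - 1.0) < 1e-12        # Kraft equality: a complete prefix code

# With i.i.d. uniform input bits, symbol k is sent with probability 2^-len_k,
# so the source entropy (bits/symbol) equals the average codeword length.
entropy = -sum(p * log2(p) for p in probs)
avg_len = sum(p * L for p, L in zip(probs, lengths))
print(entropy, avg_len)                     # both 3.125 bits per 9-ary symbol
```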
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm finds a codebook based on the better vectors sent to an initial codebook by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithm is expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
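A sketch of the PCA-LBG-Median variant as described: project the training vectors onto the first principal component, split them into k groups by projected value, seed the codebook with each group's median vector, then refine with standard LBG (k-means-style) iterations. The group count and toy data below are arbitrary:

```python
import numpy as np

def pca_lbg_median(train, k=8, iters=20):
    """PCA-seeded LBG codebook: group vectors by first-PC projection,
    seed with per-group medians, then run nearest-neighbor/centroid updates."""
    X = np.asarray(train, dtype=float)
    Xc = X - X.mean(axis=0)
    # First principal component from the covariance eigen-decomposition.
    _, vecs = np.linalg.eigh(np.cov(Xc.T))
    pc1 = vecs[:, -1]                              # eigh sorts eigenvalues ascending
    order = np.argsort(Xc @ pc1)
    groups = np.array_split(order, k)
    codebook = np.array([np.median(X[g], axis=0) for g in groups])
    for _ in range(iters):                         # standard LBG refinement
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            members = X[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
cb = pca_lbg_median(rng.normal(size=(500, 4)), k=8)
print(cb.shape)   # (8, 4)
```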
Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.
Tian, Yuling; Zhang, Hongxian
2016-01-01
For the purposes of information retrieval, users must find highly relevant documents from within a system (often quite a large one, comprising many individual documents) based on an input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor and a hot research topic: there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others in respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm indeed effectively and rapidly identifies optimal ranking functions. PMID:27487242
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to achieve better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, the Artificial Bee Colony (ABC) algorithm is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach in which the Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
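A toy sketch of the memetic structure described above, with an ABC-inspired neighborhood search standing in for the paper's local search; the task costs and all parameters are made up for illustration:

```python
# Toy memetic scheduler: a GA evolves task-to-processor assignments and an
# ABC-style local search (bees exploring neighbors) refines each offspring.
import numpy as np

rng = np.random.default_rng(2)
costs = rng.uniform(1, 10, size=30)       # hypothetical task costs
M = 4                                     # number of processors

def makespan(assign):
    return max(costs[assign == m].sum() for m in range(M))

def local_search(assign, trials=20):
    """ABC-style exploration: move one task to another processor, keep improvements."""
    best, best_f = assign, makespan(assign)
    for _ in range(trials):
        cand = best.copy()
        cand[rng.integers(len(costs))] = rng.integers(M)
        f = makespan(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

pop = [rng.integers(M, size=len(costs)) for _ in range(10)]
for gen in range(30):
    pop.sort(key=makespan)                        # elitist ordering
    child = pop[0].copy()
    mask = rng.random(len(costs)) < 0.2           # mutation of the best parent
    child[mask] = rng.integers(M, size=mask.sum())
    pop[-1] = local_search(child)                 # memetic step: refine offspring
print(min(makespan(a) for a in pop))
```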
Trait responses of invasive aquatic macrophyte congeners: colonizing diploid outperforms polyploid
Grewell, Brenda J.; Skaer Thomason, Meghan J.; Futrell, Caryn J.; Iannucci, Maria; Drenovsky, Rebecca E.
2016-01-01
Understanding traits underlying colonization and niche breadth of invasive plants is key to developing sustainable management solutions to curtail invasions at the establishment phase, when efforts are often most effective. The aim of this study was to evaluate how two invasive congeners differing in ploidy respond to high and low resource availability following establishment from asexual fragments. Because polyploids are expected to have wider niche breadths than diploid ancestors, we predicted that a decaploid species would have superior ability to maximize resource uptake and use, and outperform a diploid congener when colonizing environments with contrasting light and nutrient availability. A mesocosm experiment was designed to test the main and interactive effects of ploidy (diploid and decaploid) and soil nutrient availability (low and high) nested within light environments (shade and sun) of two invasive aquatic plant congeners. Counter to our predictions, the diploid congener outperformed the decaploid in the early stage of growth. Although growth was similar and low in the cytotypes at low nutrient availability, the diploid species had much higher growth rate and biomass accumulation than the polyploid with nutrient enrichment, irrespective of light environment. Our results also revealed extreme differences in time to anthesis between the cytotypes. The rapid growth and earlier flowering of the diploid congener relative to the decaploid congener represent alternate strategies for establishment and success. PMID:26921139
Redersdorff, Sandrine; Martinot, Delphine
2009-06-01
The present study examines the effects of group status on self-esteem when individuals are outperformed by an in-group target (Experiments 1 and 2) or an out-group (Experiment 2). The main aim was to examine different self-protective mechanisms when the current standing of the in-group vis-à-vis another group is either unfavourable (low status) or favourable (high status). Experiment 1 showed that when outperformed by an in-group target, the members of a low status group reported higher self-esteem than members of a high status group. Moreover, this effect was mediated by group identification. Experiment 2 replicated the previous results and gave rise to similar effects on investment in the group. The perceived relevance of the comparison group appeared to protect the self-esteem of high status group members. This research demonstrates the mediating role of self-protection mechanisms such as group identification and the perceived relevance of a comparison group. PMID:18922208
NASA Astrophysics Data System (ADS)
Zhao, Zhanlue
This dissertation consists of two parts. The first part deals with the performance appraisal of estimation algorithms. The second part focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, with the evolution of estimation theory and the increase in problem complexity, performance appraisal is becoming more and more challenging for engineers seeking to draw comprehensive conclusions. However, the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures, which include local performance measures, global performance measures and a model distortion measure. The second part focuses on the application of recursive best linear unbiased estimation (BLUE), or linear minimum mean square error (LMMSE) estimation, to nonlinear measurement problems in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by my advisor Dr. X. Rong Li. Based on the so-called quasi-recursive best linear unbiased filtering technique, the linear-Gaussian assumptions of the Kalman filter can be relaxed such that a general linear filtering technique for nonlinear systems can be achieved. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking and outperforms the existing method significantly in terms of accuracy, credibility and robustness.
NASA Astrophysics Data System (ADS)
Yang, Cui-Li; Tang, Kit-Sang
2011-12-01
By considering the eigenratio of the Laplacian matrix as the synchronizability measure, this paper presents an efficient method to enhance the synchronizability of undirected and unweighted networks via rewiring. The rewiring method combines tabu search with a local greedy algorithm so that an effective search for solutions can be achieved. As demonstrated in the simulation results, the proposed approach outperforms existing methods for a large variety of initial networks, both in terms of speed and quality of solutions.
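A minimal sketch of the underlying measure and the greedy part of the method: the eigenratio lambda_N/lambda_2 of the graph Laplacian is computed with numpy/networkx, and a random rewiring is kept only when it lowers the ratio (the tabu-search layer of the paper is omitted here):

```python
# Sketch: greedy eigenratio-reducing rewiring of an undirected, unweighted graph.
import numpy as np
import networkx as nx

def eigenratio(G):
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    return lam[-1] / lam[1]                      # lambda_N / lambda_2

rng = np.random.default_rng(3)
G = nx.connected_watts_strogatz_graph(50, 4, 0.1, seed=3)
best = eigenratio(G)
for _ in range(200):
    u, v = list(G.edges())[rng.integers(G.number_of_edges())]
    w = rng.integers(G.number_of_nodes())
    if w in (u, v) or G.has_edge(u, w):
        continue
    G.remove_edge(u, v); G.add_edge(u, w)        # propose a rewiring
    if nx.is_connected(G) and eigenratio(G) < best:
        best = eigenratio(G)                     # accept improving move
    else:
        G.remove_edge(u, w); G.add_edge(u, v)    # revert otherwise
print(best)
```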
Low-Friction Minilaparoscopy Outperforms Regular 5-mm and 3-mm Instruments for Precise Tasks
Firme, Wood A.; Lima, Diego L.; de Paula Lopes, Vladmir Goldstein; Montandon, Isabelle D.; Filho, Flavio Santos; Shadduck, Phillip P.
2015-01-01
Background and Objectives: Therapeutic laparoscopy was incorporated into surgical practice more than 25 y ago. Several modifications have since been developed to further minimize surgical trauma and improve results. Minilaparoscopy, performed with 2- to 3-mm instruments, was introduced in the mid-1990s but failed to attain mainstream use, mostly because of the limitations of the early devices. Buoyed by renewed interest, new generations of mini instruments are being developed with improved functionality and durability. This study is an objective evaluation of a new set of mini instruments with a novel low-friction design. Methods: Twenty-two medical students and 22 surgical residents served as study participants. Three designs of laparoscopic instruments were evaluated: conventional 5 mm, traditional 3 mm, and low-friction 3 mm. The instruments were evaluated with a standard surgical simulator, emulating 4 exercises of various complexities, testing grasping, precise 2-handed movements, and suturing. The metric measured was time to task completion, with 5 replicates for every combination of instrument, exercise, and participant. Results: For all 4 tasks, the instrument design that performed the best was the same in both the medical student and surgical resident groups. For the gross-grasping task, the 5-mm conventional instruments performed best, followed by the low-friction mini instruments. For the 3 more complex and precise tasks, the low-friction mini instruments outperformed both of the other instrument designs. Conclusion: In standard surgical simulator exercises, low-friction minilaparoscopic instruments outperformed both conventional 3- and 5-mm laparoscopic instruments for precise tasks. PMID:26390530
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Ramyachitra, D.; Sofia, M.; Manikandan, P.
2015-01-01
Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), as well as the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222
Kuroda, Yoshihiro; Nisky, Ilana; Uranishi, Yuki; Imura, Masataka; Okamura, Allison M; Oshiro, Osamu
2013-01-01
We present a novel algorithm for real-time detection of the onset of the surface electromyography (sEMG) signal in step-tracking wrist movements. The method identifies the abrupt increase of the quasi-tension signal calculated from sEMG that results from the step-by-step recruitment of activated motor units. We assessed the performance of our proposed algorithm using both simulated and real sEMG signals, and compared it with two existing detection methods. Evaluation with simulated sEMG showed that the detection accuracy of our method is robust to different signal-to-noise ratios, and that it outperforms the existing methods in terms of bias when the noise is large (low SNR). Evaluation with real sEMG also indicated better detection performance compared to existing methods. PMID:24110123
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
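A minimal sketch of wavelet-based image fusion using a common rule set (average the approximation coefficients, keep the larger-magnitude detail coefficients); this follows the general approach rather than the report's exact implementation:

```python
# Sketch: single-level 2D DWT fusion of two co-registered images with pywt.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2"):
    cA_a, details_a = pywt.dwt2(img_a, wavelet)
    cA_b, details_b = pywt.dwt2(img_b, wavelet)
    cA = (cA_a + cA_b) / 2.0                         # average low-frequency content
    fused_details = tuple(
        np.where(np.abs(da) >= np.abs(db), da, db)   # max-abs rule keeps sharp detail
        for da, db in zip(details_a, details_b)
    )
    return pywt.idwt2((cA, fused_details), wavelet)

rng = np.random.default_rng(4)
a = rng.random((128, 128))   # stand-ins for co-registered source images
b = rng.random((128, 128))
print(wavelet_fuse(a, b).shape)
```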
NASA Astrophysics Data System (ADS)
Bhattacharyya, Jayanta; Bellucci, Joseph J.; Weitzhandler, Isaac; McDaniel, Jonathan R.; Spasojevic, Ivan; Li, Xinghai; Lin, Chao-Chieh; Chi, Jen-Tsan Ashley; Chilkoti, Ashutosh
2015-08-01
Packaging clinically relevant hydrophobic drugs into a self-assembled nanoparticle can improve their aqueous solubility, plasma half-life, tumour-specific uptake and therapeutic potential. To this end, here we conjugated paclitaxel (PTX) to recombinant chimeric polypeptides (CPs) that spontaneously self-assemble into ~60 nm near-monodisperse nanoparticles that increased the systemic exposure of PTX by sevenfold compared with free drug and twofold compared with the Food and Drug Administration-approved taxane nanoformulation (Abraxane). The tumour uptake of the CP-PTX nanoparticle was fivefold greater than free drug and twofold greater than Abraxane. In a murine cancer model of human triple-negative breast cancer and prostate cancer, CP-PTX induced near-complete tumour regression after a single dose in both tumour models, whereas at the same dose, no mice treated with Abraxane survived for >80 days (breast) and 60 days (prostate), respectively. These results show that a molecularly engineered nanoparticle with precisely engineered design features outperforms Abraxane, the current gold standard for PTX delivery.
Wolf, Max; Krause, Jens; Carney, Patricia A.; Bogart, Andy; Kurvers, Ralf H. J. M.
2015-01-01
While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence. PMID:26267331
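The three aggregation rules can be sketched in a few lines; the reader weights below are hypothetical stand-ins for whatever accuracy measure the weighted quorum uses:

```python
# Sketch: aggregating independent recall decisions (1 = recall, 0 = no recall).
import numpy as np

def majority(votes):
    """Recall if more than half of the readers vote recall."""
    return int(votes.mean() > 0.5)

def quorum(votes, q):
    """Recall if at least q readers vote recall."""
    return int(votes.sum() >= q)

def weighted_quorum(votes, weights, q):
    """Quorum on reader weights (e.g., past accuracy) instead of raw counts."""
    return int(weights[votes == 1].sum() >= q)

votes = np.array([1, 0, 1, 1, 0])              # five readers' decisions
weights = np.array([0.9, 0.6, 0.8, 0.7, 0.5])  # hypothetical reader accuracies
print(majority(votes), quorum(votes, 2), weighted_quorum(votes, weights, 1.5))
```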
Murciano Martínez, Patricia; Kabel, Mirjam A; Gruppen, Harry
2016-11-20
Enzyme-hydrolysed (hemi-)celluloses from oil palm empty fruit bunches (EFBs) are a source for the production of bio-fuels or chemicals. In this study, after either peracetic acid delignification or alkaline extraction, EFB hemicellulose structures were described, aided by xylanase hydrolysis. Delignification of EFB facilitated the hydrolysis of EFB xylan by a pure endo-β-1,4-xylanase. Up to 91% (w/w) of the non-extracted xylan in the delignified EFB was hydrolysed, compared to less than 4% (w/w) of that in untreated EFB. Alkaline extraction of EFB, without prior delignification, yielded only 50% of the xylan, and the xylan obtained was only 40% hydrolysed by the endo-xylanase used. Hence, delignification alone outperformed alkaline extraction as a pretreatment for enzymatic fingerprinting of EFB xylans. From the analysis of the oligosaccharide fingerprint of the delignified, endo-xylanase-hydrolysed EFB xylan, the structure was proposed to be acetylated 4-O-methylglucuronoarabinoxylan. PMID:27561506
Bhattacharyya, Jayanta; Bellucci, Joseph J.; Weitzhandler, Isaac; McDaniel, Jonathan R.; Spasojevic, Ivan; Li, Xinghai; Lin, Chao-Chieh; Chi, Jen-Tsan Ashley; Chilkoti, Ashutosh
2015-01-01
Packaging clinically relevant hydrophobic drugs into a self-assembled nanoparticle can improve their aqueous solubility, plasma half-life, tumor specific uptake and therapeutic potential. To this end, here we conjugated paclitaxel (PTX) to recombinant chimeric polypeptides (CPs) that spontaneously self-assemble into ~60-nm diameter near-monodisperse nanoparticles that increased the systemic exposure of PTX by 7-fold compared to free drug and 2-fold compared to the FDA approved taxane nanoformulation (Abraxane®). The tumor uptake of the CP-PTX nanoparticle was 5-fold greater than free drug and 2-fold greater than Abraxane. In a murine cancer model of human triple negative breast cancer and prostate cancer, CP-PTX induced near complete tumor regression after a single dose in both tumor models, whereas at the same dose, no mice treated with Abraxane survived for more than 80 days (breast) and 60 days (prostate) respectively. These results show that a molecularly engineered nanoparticle with precisely engineered design features outperforms Abraxane, the current gold standard for paclitaxel delivery. PMID:26239362
Gender differences in primary and secondary education: Are girls really outperforming boys?
NASA Astrophysics Data System (ADS)
Driessen, Geert; van Langen, Annemarie
2013-06-01
A moral panic has broken out in several countries after recent studies showed that girls were outperforming boys in education. Commissioned by the Dutch Ministry of Education, the present study examines the position of boys and girls in Dutch primary education and in the first phase of secondary education over the past ten to fifteen years. On the basis of several national and international large-scale databases, the authors examined whether one can indeed speak of a gender gap, at the expense of boys. Three domains were investigated, namely cognitive competencies, non-cognitive competencies, and school career features. The results as expressed in effect sizes show that there are hardly any differences with regard to language and mathematics proficiency. However, the position of boys in terms of educational level and attitudes and behaviour is much more unfavourable than that of girls. Girls, on the other hand, score more unfavourably with regard to sector and subject choice. While the present situation in general does not differ very much from that of a decade ago, it is difficult to predict in what way the balances might shift in the years to come.
Plants adapted to warmer climate do not outperform regional plants during a natural heat wave.
Bucharova, Anna; Durka, Walter; Hermann, Julia-Maria; Hölzel, Norbert; Michalski, Stefan; Kollmann, Johannes; Bossdorf, Oliver
2016-06-01
With ongoing climate change, many plant species may not be able to adapt rapidly enough, and some conservation experts are therefore considering to translocate warm-adapted ecotypes to mitigate effects of climate warming. Although this strategy, called assisted migration, is intuitively plausible, most of the support comes from models, whereas experimental evidence is so far scarce. Here we present data on multiple ecotypes of six grassland species, which we grew in four common gardens in Germany during a natural heat wave, with temperatures 1.4-2.0°C higher than the long-term means. In each garden we compared the performance of regional ecotypes with plants from a locality with long-term summer temperatures similar to what the plants experienced during the summer heat wave. We found no difference in performance between regional and warm-adapted plants in four of the six species. In two species, regional ecotypes even outperformed warm-adapted plants, despite elevated temperatures, which suggests that translocating warm-adapted ecotypes may not only lack the desired effect of increased performance but may even have negative consequences. Even if adaptation to climate plays a role, other factors involved in local adaptation, such as biotic interactions, may override it. Based on our results, we cannot advocate assisted migration as a universal tool to enhance the performance of local plant populations and communities during climate change. PMID:27516871
Hargrove, Levi J; Lock, Blair A; Simon, Ann M
2013-01-01
Pattern recognition myoelectric control shows great promise as an alternative to conventional amplitude-based control for controlling multiple-degree-of-freedom prosthetic limbs. Many studies have reported pattern recognition classification error performances of less than 10% during offline tests; however, it remains unclear how this translates to real-time control performance. In this contribution, we compare the real-time control performances between pattern recognition and direct myoelectric control (a popular form of conventional amplitude control) for participants who had received targeted muscle reinnervation. The real-time performance was evaluated during three tasks: 1) a box and blocks task, 2) a clothespin relocation task, and 3) a block stacking task. Our results found that pattern recognition significantly outperformed direct control for all three performance tasks. Furthermore, pattern recognition was configured much more quickly. The classification error of the pattern recognition systems used by the patients was found to be 16% (±1.6%), suggesting that systems with this error rate may still provide excellent control. Finally, patients qualitatively preferred using pattern recognition control and reported the resulting control to be smoother and more consistent. PMID:24110008
Jan, Edward; Hendricks, Jeffrey L; Husaini, Vincent; Richardson-Burns, Sarah M; Sereno, Andrew; Martin, David C; Kotov, Nicholas A
2009-12-01
The safety, function, and longevity of implantable neuroprosthetic and cardiostimulating electrodes depend heavily on the electrical properties of the electrode-tissue interface, which in many cases requires substantial improvement. While different variations of carbon nanotube materials have been shown to be suitable for neural excitation, it is critical to evaluate them against other materials used for bioelectrical interfacing, which has not been done in any study so far despite strong interest in this area. In this study, we carried out this evaluation and found that composite multiwalled carbon nanotube-polyelectrolyte (MWNT-PE) multilayer electrodes substantially outperform, in one way or another, the state-of-the-art neural interface materials available today, namely activated electrochemically deposited iridium oxide (IrOx) and poly(3,4-ethylenedioxythiophene) (PEDOT). Our findings provide concrete experimental proof of the much-discussed possibility that carbon nanotube composites can serve as an excellent new material for neural interfacing, with a strong possibility of leading to a new generation of implantable electrodes. PMID:19785391
A Mozart is not a Pavarotti: singers outperform instrumentalists on foreign accent imitation
Christiner, Markus; Reiterer, Susanne Maria
2015-01-01
Recent findings have shown that people with higher musical aptitude are also better at oral language imitation tasks. However, whether singing capacity and instrument playing contribute differently to the imitation of speech has been ignored so far. Research has just recently started to understand that instrumentalists develop quite distinct skills when compared to vocalists. In the same vein, the role of the vocal motor system in language acquisition processes has been poorly investigated, as most investigations (neurobiological and behavioral) favor examining speech perception. We set out to test whether the vocal motor system can influence the ability to learn, produce and perceive new languages by contrasting instrumentalists and vocalists. We investigated 96 participants: 27 instrumentalists, 33 vocalists and 36 non-musicians/non-singers. They were tested for their abilities to imitate foreign speech, both an unknown language (Hindi) and a second language (English), and for their musical aptitude. Results revealed that both instrumentalists and vocalists have a higher ability to imitate unintelligible speech and foreign accents than non-musicians/non-singers. Within the musician group, vocalists outperformed instrumentalists significantly. Conclusion: First, adaptive plasticity for speech imitation is not reliant on audition alone but also on vocal-motor-induced processes. Second, the vocal flexibility of singers goes together with higher speech imitation aptitude. Third, vocal motor training, as in singers, may speed up foreign language acquisition processes. PMID:26379537
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which significantly reduces training efficiency. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as is also demonstrated by the comprehensive experimental results in this paper. PMID:27044001
Evaluating super resolution algorithms
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun
2011-01-01
This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for appraising image restoration accuracy, in addition to comparing the subjective results with predictions from several objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in real-world cases where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.
Zhu, Tingting; Pimentel, Marco A F; Clifford, Gari D; Clifton, David A
2015-08-01
Respiratory rate (RR) is a key vital sign that is monitored to assess the health of patients. With the increasing availability of wearable devices, it is important that RR is extracted in a robust and noninvasive manner from the photoplethysmogram (PPG) acquired from pulse oximeters and similar devices. However, existing methods of noninvasive RR estimation suffer from a lack of robustness, with the result that they are not used in clinical practice. We propose a Bayesian approach to fusing the outputs of many RR estimation algorithms to improve the overall robustness of the resulting estimates. Our method estimates the accuracy of each algorithm and jointly infers the fused RR estimate in an unsupervised manner, with the aim of producing a fused estimate that is more accurate than any of the algorithms taken individually. This approach is novel; the literature has so far concentrated on attempting to produce single algorithms for RR estimation, without resulting in systems that have penetrated into clinical practice. A publicly available dataset, Capnobase, was used to validate the performance of our proposed model. Our proposed methodology was compared to the best-performing individual algorithm from the literature, as well as to the results of common fusion methodologies such as averaging, median, and maximum likelihood (ML). Our proposed methodology resulted in a mean absolute error (MAE) of 1.98 breaths per minute (bpm), outperforming other fusion strategies (mean fusion: 2.95 bpm; median fusion: 2.33 bpm; ML: 2.30 bpm). It also outperformed the best single algorithm (2.39 bpm) and the benchmark algorithm proposed for use with Capnobase (2.22 bpm). We conclude that the proposed fusion methodology can be used to combine RR estimates from multiple sources derived from the PPG, to infer a reliable and robust estimation of the respiratory rate in an unsupervised manner. PMID:26737693
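A simplified sketch of the fusion idea (not the paper's full Bayesian model): each algorithm's output is treated as the true rate plus Gaussian noise of unknown variance, and the fused estimate and the per-algorithm variances are alternately re-estimated without supervision:

```python
# Sketch: unsupervised precision-weighted fusion of several RR estimators.
import numpy as np

def fuse_rr(estimates, n_iter=50):
    """estimates: (n_windows, n_algorithms) RR estimates in breaths/min."""
    var = np.ones(estimates.shape[1])
    for _ in range(n_iter):
        w = 1.0 / var
        fused = (estimates * w).sum(axis=1) / w.sum()   # precision-weighted mean
        resid = estimates - fused[:, None]
        var = (resid ** 2).mean(axis=0) + 1e-6          # per-algorithm accuracy
    return fused, var

rng = np.random.default_rng(5)
truth = rng.uniform(10, 25, size=200)                   # synthetic RR ground truth
noise_sd = np.array([0.5, 2.0, 4.0])                    # three unequal estimators
est = truth[:, None] + rng.normal(size=(200, 3)) * noise_sd
fused, var = fuse_rr(est)
print(np.abs(fused - truth).mean())                     # MAE of the fused estimate
```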
Cheng, Kin-On; Law, Ngai-Fong; Siu, Wan-Chi; Liew, Alan Wee-Chung
2008-01-01
Background The DNA microarray technology allows the measurement of expression levels of thousands of genes under tens/hundreds of different conditions. In microarray data, genes with similar functions usually co-express under certain conditions only [1]. Thus, biclustering which clusters genes and conditions simultaneously is preferred over the traditional clustering technique in discovering these coherent genes. Various biclustering algorithms have been developed using different bicluster formulations. Unfortunately, many useful formulations result in NP-complete problems. In this article, we investigate an efficient method for identifying a popular type of biclusters called additive model. Furthermore, parallel coordinate (PC) plots are used for bicluster visualization and analysis. Results We develop a novel and efficient biclustering algorithm which can be regarded as a greedy version of an existing algorithm known as pCluster algorithm. By relaxing the constraint in homogeneity, the proposed algorithm has polynomial-time complexity in the worst case instead of exponential-time complexity as in the pCluster algorithm. Experiments on artificial datasets verify that our algorithm can identify both additive-related and multiplicative-related biclusters in the presence of overlap and noise. Biologically significant biclusters have been validated on the yeast cell-cycle expression dataset using Gene Ontology annotations. Comparative study shows that the proposed approach outperforms several existing biclustering algorithms. We also provide an interactive exploratory tool based on PC plot visualization for determining the parameters of our biclustering algorithm. Conclusion We have proposed a novel biclustering algorithm which works with PC plots for an interactive exploratory analysis of gene expression data. Experiments show that the biclustering algorithm is efficient and is capable of detecting co-regulated genes. The interactive analysis enables an optimum
Győrffy, Balázs; Karn, Thomas; Sztupinszki, Zsófia; Weltz, Boglárka; Müller, Volkmar; Pusztai, Lajos
2015-01-01
The molecular diversity of breast cancer makes it impossible to identify prognostic markers that are applicable to all breast cancers. To overcome limitations of previous multigene prognostic classifiers, we propose a new dynamic predictor: instead of using a single universal training cohort and an identical list of informative genes to predict the prognosis of new cases, a case-specific predictor is developed for each test case. Gene expression data from 3,534 breast cancers with clinical annotation including relapse-free survival is analyzed. For each test case, we select a case-specific training subset including only molecularly similar cases and a case-specific predictor is generated. This method yields different training sets and different predictors for each new patient. The model performance was assessed in leave-one-out validation and also in 325 independent cases. Prognostic discrimination was high for all cases (n = 3,534, HR = 3.68, p = 1.67E-56). The dynamic predictor showed higher overall accuracy (0.68) than genomic surrogates for Oncotype DX (0.64), Genomic Grade Index (0.61) or MammaPrint (0.47). The dynamic predictor was also effective in triple-negative cancers (n = 427, HR = 3.08, p = 0.0093) where the above classifiers all failed. Validation in independent patients yielded similar classification power (HR = 3.57). The dynamic classifier is available online at http://www.recurrenceonline.com/?q=Re_training. In summary, we developed a new method to make personalized prognostic prediction using case-specific training cohorts. The dynamic predictors outperform static models developed from single historical training cohorts and they also predict well in triple-negative cancers. PMID:25274406
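A minimal sketch of the case-specific training idea: for each test case, the k most similar expression profiles form the training cohort, and a classifier fitted on that cohort makes the prediction. The correlation similarity and the nearest-centroid rule are stand-ins for the paper's actual similarity measure and survival model, and all names are ours:

```python
# Sketch: dynamic, case-specific training cohort selection for each test case.
import numpy as np

def case_specific_predict(X_train, y_train, x_test, k=60):
    # similarity of the test expression profile to every training profile
    sims = np.array([np.corrcoef(x_test, x)[0, 1] for x in X_train])
    idx = np.argsort(sims)[-k:]                 # case-specific training cohort
    Xk, yk = X_train[idx], y_train[idx]
    # toy classifier: nearest class centroid (assumes both classes appear in cohort)
    centroids = [Xk[yk == c].mean(axis=0) for c in (0, 1)]
    d = [np.linalg.norm(x_test - c) for c in centroids]
    return int(np.argmin(d))                    # predicted risk class

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 40))                  # toy expression matrix
y = (X[:, 0] > 0).astype(int)                   # toy outcome label
print(case_specific_predict(X[1:], y[1:], X[0]), y[0])
```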
Do Cultivated Varieties of Native Plants Have the Ability to Outperform Their Wild Relatives?
Schröder, Roland; Prasse, Rüdiger
2013-01-01
Vast amounts of cultivars of native plants are annually introduced into the semi-natural range of their wild relatives for re-vegetation and restoration. As cultivars are often selected for enhanced biomass production and might transfer these traits into wild relatives by hybridization, it is suggested that cultivars and wild × cultivar hybrids are competitively superior to their wild relatives. The release of such varieties may therefore result in unintended changes in native vegetation. In this study we examined, for two species frequently used in re-vegetation (Plantago lanceolata and Lotus corniculatus), whether cultivars and artificially generated intra-specific wild × cultivar hybrids produce more vegetative and generative biomass than their wild relatives. For that purpose a competition experiment was conducted for two growing seasons in a common garden. Each plant type was grown (a) alone, (b) in pairwise combination with the same plant type and (c) in pairwise combination with a different plant type. When competing with wilds, cultivars of both species showed larger biomass production than their wilds in the first year only, and hybrids showed larger biomass production than their wild relatives in both study years. As biomass production is an important factor determining fitness and competitive ability, we conclude that cultivars and hybrids are competitively superior to their wild relatives. However, cultivars of both species experienced large fitness reductions (nearly complete mortality in L. corniculatus) due to local climatic conditions. We conclude that cultivars are good competitors only as long as they are not subjected to stressful environmental factors. As hybrids seemed to inherit both the ability to cope with the local climatic conditions from their wild parents and the enhanced competitive strength from their cultivar parents, we regard them as strong competitors and assume that they are able to outperform their wilds at least over
ERIC Educational Resources Information Center
Southam-Gerow, Michael A.; Weisz, John R.; Chu, Brian C.; McLeod, Bryce D.; Gordis, Elana B.; Connor-Smith, Jennifer K.
2010-01-01
Objective: Most tests of cognitive behavioral therapy (CBT) for youth anxiety disorders have shown beneficial effects, but these have been efficacy trials with recruited youths treated by researcher-employed therapists. One previous (nonrandomized) trial in community clinics found that CBT did not outperform usual care (UC). The present study used…
ERIC Educational Resources Information Center
Agodini, Roberto; Harris, Barbara; Remillard, Janine; Thomas, Melissa
2013-01-01
This appendix provides the details that underlie the analyses reported in the evaluation brief, "After Two Years, Three Elementary Math Curricula Outperform a Fourth." The details are organized in six sections: Study Curricula and Design (Section A), Data Collection (Section B), Construction of the Analysis File (Section C), Curriculum Effects on…
NASA Astrophysics Data System (ADS)
Zhou, Guiyun; Sun, Zhongxuan; Fu, Suhua
2016-05-01
Depressions are common features in raster digital elevation models (DEMs), and they are usually filled for the automatic extraction of drainage networks. Among existing algorithms for filling depressions, the Priority-Flood algorithm substantially outperforms the others in terms of both time complexity and memory requirement. The Priority-Flood algorithm uses a priority queue to process cells. This study proposes an efficient variant of the Priority-Flood algorithm, which considerably reduces the number of cells processed by the priority queue by using region-growing procedures to process the majority of cells, namely those not within depressions or flat regions. We present three implementations of the proposed variant: a two-pass implementation, a one-pass implementation and a direct implementation. Experiments are conducted on thirty DEMs with a resolution of 3 m. All three implementations run faster than existing variants of the algorithm for all tested DEMs. The one-pass implementation runs the fastest, and the average speed-up over the fastest existing variant is 44.6%.
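For reference, a minimal sketch of the basic Priority-Flood fill that the proposed variant accelerates; the paper's region-growing shortcut is omitted here. Border cells seed a priority queue, the lowest cell is popped, and any unseen neighbour below it is raised to its level:

```python
# Sketch: basic Priority-Flood depression filling on a small raster DEM.
import heapq
import numpy as np

def priority_flood_fill(dem):
    nrows, ncols = dem.shape
    filled = dem.astype(float).copy()
    seen = np.zeros(dem.shape, dtype=bool)
    heap = []
    for r in range(nrows):                       # seed the queue with the border
        for c in range(ncols):
            if r in (0, nrows - 1) or c in (0, ncols - 1):
                heapq.heappush(heap, (filled[r, c], r, c))
                seen[r, c] = True
    while heap:
        z, r, c = heapq.heappop(heap)            # lowest unprocessed cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < nrows and 0 <= nc < ncols and not seen[nr, nc]:
                seen[nr, nc] = True
                filled[nr, nc] = max(filled[nr, nc], z)   # raise depression cells
                heapq.heappush(heap, (filled[nr, nc], nr, nc))
    return filled

dem = np.array([[5, 5, 5, 5], [5, 1, 2, 5], [5, 1, 1, 5], [5, 5, 5, 5.]])
print(priority_flood_fill(dem))   # the inner depression is raised to 5
```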
The Chopthin Algorithm for Resampling
NASA Astrophysics Data System (ADS)
Gandy, Axel; Lau, F. Din-Houn
2016-08-01
Resampling is a standard step in particle filters and, more generally, in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods, the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
Computations and algorithms in physical and biological problems
NASA Astrophysics Data System (ADS)
Qin, Yu
This dissertation presents the applications of state-of-the-art computation techniques and data analysis algorithms to three physical and biological problems: assembling DNA pieces, optimizing self-assembly yield, and identifying correlations in large multivariate datasets. In the first topic, in-depth analysis of using Sequencing by Hybridization (SBH) to reconstruct target DNA sequences shows that a modified reconstruction algorithm can overcome the theoretical boundary without the need for different types of biochemical assays and is robust to error. In the second topic, consistent with theoretical predictions, simulations on Graphics Processing Units (GPUs) demonstrate how controlling the short-ranged interactions between particles and controlling the concentrations optimize the self-assembly yield of a desired structure; nonequilibrium behavior when optimizing concentrations is also unveiled by leveraging the computation capacity of GPUs. In the last topic, a methodology is introduced that incorporates existing categorization information into the search process to efficiently reconstruct the optimal true correlation matrix for multivariate datasets. Simulations on both synthetic and real financial datasets show that the algorithm is able to detect signals below the Random Matrix Theory (RMT) threshold. These three problems are representative of the use of massive computation techniques and data analysis algorithms to tackle optimization problems, outperforming theoretical boundaries when prior information is incorporated into the computation.
GPS-Free Localization Algorithm for Wireless Sensor Networks
Wang, Lei; Xu, Qingzheng
2010-01-01
Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694
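A minimal sketch of the coordinate-merging step: given shared anchor nodes expressed in both systems, the 2D rotation and translation are recovered in closed form (a Kabsch/Procrustes fit, which is one standard way to realize such a transformation) and packed into a homogeneous matrix; the determinant check guards against the flip ambiguity mentioned above:

```python
# Sketch: homogeneous transformation merging a local frame into the global one.
import numpy as np

def merge_transform(local_pts, global_pts):
    """Return the 3x3 homogeneous matrix mapping local -> global coordinates."""
    cl, cg = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - cl).T @ (global_pts - cg)   # cross-covariance of anchors
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # reject reflections (flip ambiguity)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cg - R @ cl
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

rng = np.random.default_rng(7)
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
local = rng.random((4, 2))                       # shared nodes, local frame
global_ = local @ R_true.T + np.array([3.0, -1.0])
T = merge_transform(local, global_)
print(T @ np.array([0.5, 0.5, 1.0]))             # map a point homogeneously
```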
Schwarz, Christopher G; Reid, Robert I; Gunter, Jeffrey L; Senjem, Matthew L; Przybelski, Scott A; Zuk, Samantha M; Whitwell, Jennifer L; Vemuri, Prashanthi; Josephs, Keith A; Kantarci, Kejal; Thompson, Paul M; Petersen, Ronald C; Jack, Clifford R
2014-07-01
Tract-Based Spatial Statistics (TBSS) is a popular software pipeline to coregister sets of diffusion tensor Fractional Anisotropy (FA) images for performing voxel-wise comparisons. It is primarily defined by its skeleton projection step, intended to reduce effects of local misregistration. A white matter "skeleton" is computed by morphological thinning of the inter-subject mean FA, and then all voxels are projected to the nearest location on this skeleton. Here we investigate several enhancements to the TBSS pipeline based on recent advances in registration for other modalities, principally groupwise registration with the ANTS-SyN algorithm. We validate these enhancements using simulation experiments with synthetically modified images. When used with these enhancements, we discover that TBSS's skeleton projection step actually reduces algorithm accuracy, as the improved registration leaves fewer errors to warrant correction, and the effects of this projection's compromises become stronger than those of its benefits. In our experiments, our proposed pipeline without skeleton projection is more sensitive for detecting true changes and has greater specificity in resisting false positives from misregistration. We also present comparative results of the proposed and traditional methods, both with and without the skeleton projection step, on three real-life datasets: two comparing differing populations of Alzheimer's disease patients to matched controls, and one comparing progressive supranuclear palsy patients to matched controls. The proposed pipeline produces more plausible results according to each disease's pathophysiology. PMID:24650605
Dietz, Bart; van Knippenberg, Daan; Hirst, Giles; Restubog, Simon Lloyd D
2015-11-01
Performance-prove goal orientation affects performance because it drives people to try to outperform others. A proper understanding of the performance-motivating potential of performance-prove goal orientation requires, however, that we consider the question of whom people desire to outperform. In a multilevel analysis of this issue, we propose that the shared team identification of a team plays an important moderating role here, directing the performance-motivating influence of performance-prove goal orientation to either the team level or the individual level of performance. A multilevel study of salespeople nested in teams supports this proposition, showing that performance-prove goal orientation motivates team performance more with higher shared team identification, whereas performance-prove goal orientation motivates individual performance more with lower shared team identification. Establishing the robustness of these findings, a second study replicates them with individual and team performance in an educational context. PMID:26011723
Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks
NASA Astrophysics Data System (ADS)
Luo, Hongbin; Li, Lemin; Yu, Hongfang
2006-12-01
Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.
Feature weighted naïve Bayes algorithm for information retrieval of enterprise systems
NASA Astrophysics Data System (ADS)
Wang, Li; Ji, Ping; Qi, Jing; Shan, Siqing; Bi, Zhuming; Deng, Weiguo; Zhang, Naijing
2014-01-01
Automated information retrieval is critical for enterprise information systems to acquire knowledge from vast data sets. One challenge in information retrieval is text classification. Current practices rely heavily on the classical naïve Bayes algorithm because of its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption of the independence of terms is the main reason for unsatisfactory classification in many real-world applications. To overcome the limitations, the dependencies between terms are considered by integrating a term frequency-inverse document frequency (TF-IDF) weighting algorithm into the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both frequencies and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted; the comparisons with other classification methods show that the proposed method outperforms existing algorithms in terms of precision and recall rate.
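A minimal sketch of the direction described above, with the exact weighting scheme as a stand-in for the article's improved one: a multinomial naïve Bayes classifier whose term statistics are TF-IDF-weighted rather than raw counts:

```python
# Sketch: TF-IDF-weighted naive Bayes text classification on a toy term-count matrix.
import numpy as np

def tfidf(counts):
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    idf = np.log((1 + counts.shape[0]) / (1 + (counts > 0).sum(axis=0))) + 1
    return tf * idf                              # weight terms by importance

def train_weighted_nb(counts, labels, alpha=1.0):
    X = tfidf(counts)
    classes = np.unique(labels)
    prior = np.array([(labels == c).mean() for c in classes])
    cond = np.array([X[labels == c].sum(axis=0) + alpha for c in classes])
    cond /= cond.sum(axis=1, keepdims=True)      # per-class term distributions
    return classes, np.log(prior), np.log(cond)

def predict(counts, classes, log_prior, log_cond):
    X = tfidf(counts)
    return classes[(X @ log_cond.T + log_prior).argmax(axis=1)]

rng = np.random.default_rng(8)
docs = rng.poisson(1.0, size=(60, 100))          # toy term-count matrix
labels = (docs[:, 0] + docs[:, 1] > 1).astype(int)
model = train_weighted_nb(docs[:40], labels[:40])
print((predict(docs[40:], *model) == labels[40:]).mean())
```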
A hybrid multi-objective particle swarm algorithm for a mixed-model assembly line sequencing problem
NASA Astrophysics Data System (ADS)
Rahimi-Vahed, A. R.; Mirghorbani, S. M.; Rabbani, M.
2007-12-01
Mixed-model assembly line sequencing is one of the most important strategic problems in the field of production management where diversified customers' demands exist. In this article, three major goals are considered: (i) total utility work, (ii) total production rate variation and (iii) total setup cost. Due to the complexity of the problem, a hybrid multi-objective algorithm based on particle swarm optimization (PSO) and tabu search (TS) is devised to obtain the locally Pareto-optimal frontier where simultaneous minimization of the above-mentioned objectives is desired. In order to validate the performance of the proposed algorithm in terms of solution quality and diversity level, the algorithm is applied to various test problems and its reliability, based on different comparison metrics, is compared with three prominent multi-objective genetic algorithms, PS-NC GA, NSGA-II and SPEA-II. The computational results show that the proposed hybrid algorithm significantly outperforms existing genetic algorithms in large-sized problems.
3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response
NASA Astrophysics Data System (ADS)
Maiti, A.; Small, W.; Lewicki, J. P.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.
2016-04-01
3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance.
3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response.
Maiti, A; Small, W; Lewicki, J P; Weisgraber, T H; Duoss, E B; Chinn, S C; Pearson, M A; Spadaccini, C M; Maxwell, R S; Wilson, T S
2016-01-01
3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter's improved long-term stability and mechanical performance. PMID:27117858
Bayesian Markov models consistently outperform PWMs at predicting motifs in nucleotide sequences.
Siebert, Matthias; Söding, Johannes
2016-07-27
Position weight matrices (PWMs) are the standard model for DNA and RNA regulatory motifs. In PWMs, nucleotide probabilities are independent of nucleotides at other positions. Models that account for dependencies need many parameters and are prone to overfitting. We have developed a Bayesian approach for motif discovery using Markov models in which conditional probabilities of order k - 1 act as priors for those of order k. This Bayesian Markov model (BaMM) training automatically adapts model complexity to the amount of available data. We also derive an EM algorithm for de-novo discovery of enriched motifs. For transcription factor binding, BaMMs achieve significantly (P = 1/16) higher cross-validated partial AUC than PWMs in 97% of 446 ChIP-seq ENCODE datasets and improve performance by 36% on average. BaMMs also learn complex multipartite motifs, improving predictions of transcription start sites, polyadenylation sites, bacterial pause sites, and RNA binding sites by 26-101%. BaMMs never performed worse than PWMs. These robust improvements argue in favour of generally replacing PWMs by BaMMs. PMID:27288444
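The core recursion, order-k conditionals smoothed toward those of order k - 1, can be sketched as follows. This is a minimal interpretation of the abstract: the pseudocount strength alpha and the ACGT-only alphabet are assumptions, and the paper's full Bayesian treatment and EM-based motif discovery are not reproduced.

    from collections import defaultdict

    def bamm_probs(seqs, k, alpha=20.0):
        """Order-k conditional probabilities smoothed toward order k-1.
        alpha is an illustrative pseudocount strength; sequences are
        assumed to be over the alphabet ACGT."""
        if k == 0:
            counts = defaultdict(int)
            for s in seqs:
                for c in s:
                    counts[c] += 1
            total = sum(counts.values())
            return {"": {c: counts[c] / total for c in "ACGT"}}
        lower = bamm_probs(seqs, k - 1, alpha)
        counts = defaultdict(lambda: defaultdict(int))
        for s in seqs:
            for i in range(k, len(s)):
                counts[s[i - k:i]][s[i]] += 1
        uniform = {c: 0.25 for c in "ACGT"}
        probs = {}
        for ctx, cc in counts.items():
            n = sum(cc.values())
            prior = lower.get(ctx[1:], uniform)   # order-(k-1) conditional as prior
            probs[ctx] = {c: (cc[c] + alpha * prior[c]) / (n + alpha)
                          for c in "ACGT"}
        return probs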
3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response
Maiti, A.; Small, W.; Lewicki, J.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.
2016-04-27
3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance.
3D printed cellular solid outperforms traditional stochastic foam in long-term mechanical response
Maiti, A.; Small, W.; Lewicki, J. P.; Weisgraber, T. H.; Duoss, E. B.; Chinn, S. C.; Pearson, M. A.; Spadaccini, C. M.; Maxwell, R. S.; Wilson, T. S.
2016-01-01
3D printing of polymeric foams by direct-ink-write is a recent technological breakthrough that enables the creation of versatile compressible solids with programmable microstructure, customizable shapes, and tunable mechanical response including negative elastic modulus. However, in many applications the success of these 3D printed materials as a viable replacement for traditional stochastic foams critically depends on their mechanical performance and micro-architectural stability while deployed under long-term mechanical strain. To predict the long-term performance of the two types of foams we employed multi-year-long accelerated aging studies under compressive strain followed by a time-temperature-superposition analysis using a minimum-arc-length-based algorithm. The resulting master curves predict superior long-term performance of the 3D printed foam in terms of two different metrics, i.e., compression set and load retention. To gain deeper understanding, we imaged the microstructure of both foams using X-ray computed tomography, and performed finite-element analysis of the mechanical response within these microstructures. This indicates a wider stress variation in the stochastic foam with points of more extreme local stress as compared to the 3D printed material, which might explain the latter’s improved long-term stability and mechanical performance. PMID:27117858
An Evolved Wavelet Library Based on Genetic Algorithm
Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.
2014-01-01
As the size of the images being captured increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the process of quantization and thresholding further increases the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image portion except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225
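Two pieces of the described GA are easy to make concrete: the PSNR fitness the GA maximizes, and a population-level shuffling step. The PSNR formula is standard; shuffle_operator below is only a plausible reading of the paper's operator, whose exact form the abstract does not specify.

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB, used as the GA fitness."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def shuffle_operator(population, rng, fraction=0.2):
        """Illustrative 'shuffling': permute the filter coefficients of a
        random subset of individuals so the GA can escape a local maximum
        (an assumption; the abstract does not define the operator)."""
        k = max(1, int(fraction * len(population)))
        for idx in rng.choice(len(population), size=k, replace=False):
            rng.shuffle(population[idx])   # in-place permutation of one individual
        return population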
Kwan, Mei-Po; Xiao, Ningchuan; Ding, Guoxiang
2015-01-01
Due to the complexity and multidimensional characteristics of human activities, assessing the similarity of human activity patterns and classifying individuals with similar patterns remains highly challenging. This paper presents a new and unique methodology for evaluating the similarity among individual activity patterns. It conceptualizes multidimensional sequence alignment (MDSA) as a multiobjective optimization problem, and solves this problem with an evolutionary algorithm. The study utilizes sequence alignment to code multiple facets of human activities into multidimensional sequences, and to treat similarity assessment as a multiobjective optimization problem that aims to minimize the alignment cost for all dimensions simultaneously. A multiobjective optimization evolutionary algorithm (MOEA) is used to generate a diverse set of optimal or near-optimal alignment solutions. Evolutionary operators are specifically designed for this problem, and a local search method also is incorporated to improve the search ability of the algorithm. We demonstrate the effectiveness of our method by comparing it with a popular existing method called ClustalG using a set of 50 sequences. The results indicate that our method outperforms the existing method for most of our selected cases. The multiobjective evolutionary algorithm presented in this paper provides an effective approach for assessing activity pattern similarity, and a foundation for identifying distinctive groups of individuals with similar activity patterns. PMID:26190858
Complexity of the Quantum Adiabatic Algorithm
NASA Technical Reports Server (NTRS)
Hen, Itay
2013-01-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.
Natora, Michal; Boucsein, Clemens; Munk, Matthias H. J.; Obermayer, Klaus
2009-01-01
For the analysis of neuronal cooperativity, simultaneously recorded extracellular signals from neighboring neurons need to be sorted reliably by a spike sorting method. Many algorithms have been developed to this end, however, to date, none of them manages to fulfill a set of demanding requirements. In particular, it is desirable to have an algorithm that operates online, detects and classifies overlapping spikes in real time, and that adapts to non-stationary data. Here, we present a combined spike detection and classification algorithm, which explicitly addresses these issues. Our approach makes use of linear filters to find a new representation of the data and to optimally enhance the signal-to-noise ratio. We introduce a method called “Deconfusion” which de-correlates the filter outputs and provides source separation. Finally, a set of well-defined thresholds is applied and leads to simultaneous spike detection and spike classification. By incorporating a direct feedback, the algorithm adapts to non-stationary data and is, therefore, well suited for acute recordings. We evaluate our method on simulated and experimental data, including simultaneous intra/extra-cellular recordings made in slices of a rat cortex and recordings from the prefrontal cortex of awake behaving macaques. We compare the results to existing spike detection as well as spike sorting methods. We conclude that our algorithm meets all of the mentioned requirements and outperforms other methods under realistic signal-to-noise ratios and in the presence of overlapping spikes. PMID:19499318
Cascade Error Projection: A New Learning Algorithm
NASA Technical Reports Server (NTRS)
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.
Wellwood, Christopher; Rardin, Sean
2014-06-01
One of the many challenges for any physician is determining the correct course of treatment for patients with more than 1 area of complaint. Should the physician treat the symptoms or the underlying cause of a condition? If treating the cause, what and who determines the cause? Further complicating the issue, doctors must succeed in getting patients to follow the prescribed treatment, which has always been and will continue to be an issue in reaching therapeutic goals. In late 2009, a 49-year-old Caucasian woman visited the Natural Health Center of Medical Lake (NHCML) in Medical Lake, WA, complaining of multiple symptoms. One symptom was a goiter that had not been relieved with a prescription for 0.375 mg of Synthroid daily. Her comorbidities included mixed hyperlipidemia; multiple joint pains; alopecia; fatigue; bilateral, lower-extremity edema; and severe gastric disruption with bloating and acid reflux. After initial success from treatment, with a complete reduction of her presenting goiter and most of her other symptoms, the patient withdrew herself from her prescription medication and her nutritional supplementation. After 4 wk, the patient visited NHCML with indications of severe hypothyroidism, including a severely enlarged goiter of the right wing. After 6 wk of treatment with iodine and a glandular nutritional supplement (GTA Forte), her symptoms of severe hypothyroidism abated. Subsequent treatment for adrenal insufficiency, which was diagnosed at NHCML using salivary adrenal stress-index testing for cortisol rhythm and load, allowed complete resolution of her presenting complaints. This result persisted even at the 3-y follow-up to a greater degree than did the results from the use of thyroid nutritional supplementation and Synthroid, both alone and combined. The hypothalamus-pituitary-adrenal (HPA) axis may contribute to the existence of thyroid-type symptoms, particularly for those individuals with subclinical thyroid conditions. The treatment of the
Wellwood, Christopher; Rardin, Sean
2014-01-01
One of the many challenges for any physician is determining the correct course of treatment for patients with more than 1 area of complaint. Should the physician treat the symptoms or the underlying cause of a condition? If treating the cause, what and who determines the cause? Further complicating the issue, doctors must succeed in getting patients to follow the prescribed treatment, which has always been and will continue to be an issue in reaching therapeutic goals. In late 2009, a 49-year-old Caucasian woman visited the Natural Health Center of Medical Lake (NHCML) in Medical Lake, WA, complaining of multiple symptoms. One symptom was a goiter that had not been relieved with a prescription for 0.375 mg of Synthroid daily. Her comorbidities included mixed hyperlipidemia; multiple joint pains; alopecia; fatigue; bilateral, lower-extremity edema; and severe gastric disruption with bloating and acid reflux. After initial success from treatment, with a complete reduction of her presenting goiter and most of her other symptoms, the patient withdrew herself from her prescription medication and her nutritional supplementation. After 4 wk, the patient visited NHCML with indications of severe hypothyroidism, including a severely enlarged goiter of the right wing. After 6 wk of treatment with iodine and a glandular nutritional supplement (GTA Forte), her symptoms of severe hypothyroidism abated. Subsequent treatment for adrenal insufficiency, which was diagnosed at NHCML using salivary adrenal stress-index testing for cortisol rhythm and load, allowed complete resolution of her presenting complaints. This result persisted even at the 3-y follow-up to a greater degree than did the results from the use of thyroid nutritional supplementation and Synthroid, both alone and combined. The hypothalamus-pituitary-adrenal (HPA) axis may contribute to the existence of thyroid-type symptoms, particularly for those individuals with subclinical thyroid conditions. The treatment of the
YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing
NASA Astrophysics Data System (ADS)
Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.
2016-05-01
State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
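A generic matching-pursuit skeleton with a correlation-threshold stopping rule conveys the flavor of YAMPA's design: no sparsity level is required, only a threshold tau, which in YAMPA is derived from two computable coherence metrics of the measurement matrix. That derivation is not reproduced here, and unit-norm columns of A are assumed.

    import numpy as np

    def pursuit(A, y, tau):
        """OMP-style pursuit; stops when the maximum residual correlation
        falls below tau (in YAMPA, tau is tied to coherence metrics of A)."""
        m, n = A.shape
        support, r = [], y.copy()
        while True:
            c = np.abs(A.T @ r)
            j = int(np.argmax(c))
            if c[j] < tau or j in support:
                break
            support.append(j)
            # least-squares refit on the current support, then update residual
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ x_s
        x = np.zeros(n)
        if support:
            x[support] = x_s
        return x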
NASA Astrophysics Data System (ADS)
Madariaga, Bruce; McConnell, Kenneth E.
1987-05-01
The notion that individuals value the preservation of water resources independent of their own use of these resources is discussed. Issues in defining this value, termed "existence value," are explored. Economic models are employed to assess the role of existence value in benefit-cost analysis. The motives underlying existence value are shown to matter to contingent valuation measurement of existence benefits. A stylized contingent valuation experiment is used to study nonusers' attitudes regarding projects to improve water quality in the Chesapeake Bay. Survey results indicate that altruism is one of the motives underlying existence value and that goods other than environmental and natural resources may provide existence benefits.
A new evolutionary algorithm with structure mutation for the maximum balanced biclique problem.
Yuan, Bo; Li, Bin; Chen, Huanhuan; Yao, Xin
2015-05-01
The maximum balanced biclique problem (MBBP), an NP-hard combinatorial optimization problem, has been attracting more attention in recent years. Existing node-deletion-based algorithms usually fail to find high-quality solutions because they easily stagnate in local optima, especially when the scale of the problem grows large. In this paper, a new algorithm for the MBBP, evolutionary algorithm with structure mutation (EA/SM), is proposed. In the EA/SM framework, local search complemented with a repair-assisted restart process is adopted. A new mutation operator, SM, is proposed to enhance exploration during the local search process. The SM can change the structure of solutions dynamically while keeping their size (fitness) and feasibility unchanged. It implements a kind of large mutation in the structure space of the MBBP to help the algorithm escape from local optima. An MBBP-specific local search operator is designed to improve the quality of solutions efficiently; in addition, a new repair-assisted restart process is introduced, in which Marchiori's heuristic repair is modified to repair every new solution reinitialized by an estimation of distribution algorithm (EDA)-like process. The proposed algorithm is evaluated on a large set of benchmark graphs with various scales and densities. Experimental results show that: 1) EA/SM produces significantly better results than the state-of-the-art heuristic algorithms; 2) it also outperforms a repair-based EDA and a repair-based genetic algorithm on all benchmark graphs; and 3) the advantages of EA/SM are mainly due to the introduction of the new SM operator and the new repair-assisted restart process. PMID:25137737
HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING
A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...
NASA Technical Reports Server (NTRS)
Tachi, Susumu; Arai, Hirohiko; Maeda, Taro
1989-01-01
Tele-existence is an advanced type of teleoperation system that enables a human operator at the controls to perform remote manipulation tasks dexterously with the feeling that he or she exists in the remote anthropomorphic robot in the remote environment. The concept of tele-existence is presented, the principle of the tele-existence display method is explained, some of the prototype systems are described, and its space application is discussed.
O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J
2016-04-01
Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266
Puttick, Mark N.; Parry, Luke; Tanner, Alastair R.; Tarver, James E.; Fleming, James
2016-01-01
Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266
Waaktaar, Trine; Torgersen, Svenn
2010-04-01
This study's aim was to determine whether resilience scales could predict adjustment over and above that predicted by the five-factor model (FFM). A sample of 1,345 adolescents completed paper-and-pencil scales on FFM personality (Hierarchical Personality Inventory for Children), resilience (Ego-Resiliency Scale [ER89] by Block & Kremen, the Resilience Scale [RS] by Wagnild & Young) and adaptive behaviors (California Healthy Kids Survey, UCLA Loneliness Scale and three measures of school adaptation). The results showed that the FFM scales accounted for the highest proportion of variance in disturbance. For adaptation, the resilience scales contributed as much as the FFM. In no case did the resilience scales outperform the FFM by increasing the explained variance. The results challenge the validity of the resilience concept as an indicator of human adaptation and avoidance of disturbance, although the concept may have heuristic value in combining favorable aspects of a person's personality endowment. PMID:19961558
Wang, Jih-Terng; Hsu, Chia-Min; Kuo, Chao-Yang; Meng, Pei-Jie; Kao, Shuh-Ji; Chen, Chaolun Allen
2015-01-01
Terpios hoshinota, an encrusting cyanosponge, is known as a strong substrate competitor of reef-building corals that kills encountered coral by overgrowth. Terpios outbreaks cause significant declines in living coral cover in Indo-Pacific coral reefs, with the damage usually lasting for decades. Recent studies show that there are morphological transformations at a sponge's growth front when confronting corals. Whether these morphological transformations at coral contacts are involved with physiological outperformance (e.g., higher metabolic activity or nutritional status) over other portions of Terpios remains equivocal. In this study, we compared the indicators of photosynthetic capability and nitrogen status of a sponge-cyanobacteria association at proximal, middle, and distal portions of opponent corals. Terpios tissues in contact with corals displayed significant increases in photosynthetic oxygen production (ca. 61%), the δ13C value (ca. 4%), free proteinogenic amino acid content (ca. 85%), and Gln/Glu ratio (ca. 115%) compared to middle and distal parts of the sponge. In contrast, the maximum quantum yield (Fv/Fm), which is the indicator usually used to represent the integrity of photosystem II, of cyanobacteria photosynthesis was low (0.256~0.319) and showed an inverse trend of higher values in the distal portion of the sponge that might be due to high and variable levels of cyanobacterial phycocyanin. The inconsistent results between photosynthetic oxygen production and Fv/Fm values indicated that maximum quantum yields might not be a suitable indicator to represent the photosynthetic function of the Terpios-cyanobacteria association. Our data conclusively suggest that Terpios hoshinota competes with opponent corals not only by the morphological transformation of the sponge-cyanobacteria association but also by physiological outperformance in accumulating resources for the battle. PMID:26110525
Wang, Jih-Terng; Hsu, Chia-Min; Kuo, Chao-Yang; Meng, Pei-Jie; Kao, Shuh-Ji; Chen, Chaolun Allen
2015-01-01
Terpios hoshinota, an encrusting cyanosponge, is known as a strong substrate competitor of reef-building corals that kills encountered coral by overgrowth. Terpios outbreaks cause significant declines in living coral cover in Indo-Pacific coral reefs, with the damage usually lasting for decades. Recent studies show that there are morphological transformations at a sponge’s growth front when confronting corals. Whether these morphological transformations at coral contacts are involved with physiological outperformance (e.g., higher metabolic activity or nutritional status) over other portions of Terpios remains equivocal. In this study, we compared the indicators of photosynthetic capability and nitrogen status of a sponge-cyanobacteria association at proximal, middle, and distal portions of opponent corals. Terpios tissues in contact with corals displayed significant increases in photosynthetic oxygen production (ca. 61%), the δ13C value (ca. 4%), free proteinogenic amino acid content (ca. 85%), and Gln/Glu ratio (ca. 115%) compared to middle and distal parts of the sponge. In contrast, the maximum quantum yield (Fv/Fm), which is the indicator usually used to represent the integrity of photosystem II, of cyanobacteria photosynthesis was low (0.256~0.319) and showed an inverse trend of higher values in the distal portion of the sponge that might be due to high and variable levels of cyanobacterial phycocyanin. The inconsistent results between photosynthetic oxygen production and Fv/Fm values indicated that maximum quantum yields might not be a suitable indicator to represent the photosynthetic function of the Terpios-cyanobacteria association. Our data conclusively suggest that Terpios hoshinota competes with opponent corals not only by the morphological transformation of the sponge-cyanobacteria association but also by physiological outperformance in accumulating resources for the battle. PMID:26110525
Xia, Xuhua
2016-09-01
While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by the ML+MSA approach is due not to insufficient search of tree space, but to the distortion of phylogenetic signal by MSA methods. I have implemented PhyPA in DAMBE, together with two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing. PMID:27377322
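The PhyPA pipeline reduces to two steps that can be sketched directly: fill a distance matrix from pairwise alignments only, then hand it to a distance-based tree method. The sketch below uses SciPy's average-linkage clustering purely as a stand-in for a proper distance method such as neighbor joining, and pair_dist is an assumed helper returning a distance computed from one pairwise alignment.

    import numpy as np
    from scipy.spatial.distance import squareform
    from scipy.cluster.hierarchy import average

    def distance_tree(seqs, pair_dist):
        """Build a tree from pairwise-alignment distances only.
        pair_dist(s1, s2) -> distance from one pairwise alignment;
        average linkage (UPGMA-like) stands in for neighbor joining."""
        n = len(seqs)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = pair_dist(seqs[i], seqs[j])
        return average(squareform(D))   # linkage matrix encoding the tree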
Sorting on STAR. [CDC computer algorithm timing comparison
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
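For reference, Batcher's odd-even mergesort, the N(log N)-squared algorithm discussed above, can be written compactly; its fixed compare-exchange pattern is what maps so well onto vector hardware. A minimal sketch for power-of-two input lengths:

    def oddeven_merge_sort(a, lo=0, n=None):
        """Batcher's odd-even mergesort; len(a) must be a power of two.
        Performs O(n (log n)^2) compare-exchange operations."""
        if n is None:
            n = len(a)
        if n > 1:
            m = n // 2
            oddeven_merge_sort(a, lo, m)
            oddeven_merge_sort(a, lo + m, m)
            oddeven_merge(a, lo, n, 1)

    def oddeven_merge(a, lo, n, r):
        step = r * 2
        if step < n:
            oddeven_merge(a, lo, n, step)
            oddeven_merge(a, lo + r, n, step)
            for i in range(lo + r, lo + n - r, step):
                if a[i] > a[i + r]:              # compare-exchange
                    a[i], a[i + r] = a[i + r], a[i]
        else:
            if a[lo] > a[lo + r]:
                a[lo], a[lo + r] = a[lo + r], a[lo]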
Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.
2011-09-01
The mutation-based artificial fish swarm (AFS) algorithm presented herein includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.
Why envy outperforms admiration.
van de Ven, Niels; Zeelenberg, Marcel; Pieters, Rik
2011-06-01
Four studies tested the hypothesis that the emotion of benign envy, but not the emotions of admiration or malicious envy, motivates people to improve themselves. Studies 1 to 3 found that only benign envy was related to the motivation to study more (Study 1) and to actual performance on the Remote Associates Task (which measures intelligence and creativity; Studies 2 and 3). Study 4 found that an upward social comparison triggered benign envy and subsequent better performance only when people thought self-improvement was attainable. When participants thought self-improvement was hard, an upward social comparison led to more admiration and no motivation to do better. Implications of these findings for theories of social emotions such as envy, social comparisons, and for understanding the influence of role models are discussed. PMID:21383070
MEDUSAHEAD OUTPERFORMS SQUIRRETAIL
Technology Transfer Automated Retrieval System (TEKTRAN)
Understanding the ecological processes fostering invasion and dominance by medusahead is central to its management. The objectives of this study were 1) to quantify and compare interference between medusahead and squirreltail under different concentrations of soil N and P and 2) to compare growth r...
Enhanced Landweber algorithm via Bregman iterations for bioluminescence tomography
NASA Astrophysics Data System (ADS)
Xia, Yi; Zhang, Meng
2014-09-01
Bioluminescence tomography (BLT) is an important optical molecular imaging modality aimed at visualizing physiological and pathological processes at cellular and molecular levels. While the forward process of light propagation is described by the diffusion approximation to radiative transfer equation, BLT is the inverse problem to reconstruct the 3D localization and quantification of internal bioluminescent sources distribution. Due to the inherent ill-posedness of the BLT problem, regularization is generally indispensable to obtain more favorable reconstruction. In particular, total variation (TV) regularization is known to be effective for piecewise-constant source distribution which can permit sharp discontinuities and preserve edges. However, total variation regularization generally suffers from the unsatisfactory staircasing effect. In this work, we introduce the Bregman iterative regularization to alleviate this degeneration and enhance the numerical reconstruction of BLT. Based on the existing Landweber method (LM), we put forward the Bregman-LM-TV algorithm for BLT. Numerical experiments are carried out and preliminary simulation results are reported to evaluate the proposed algorithms. It is found that Bregman-LM-TV can significantly outperform the individual Landweber method for BLT when the source distribution is piecewise-constant.
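The Landweber method at the core of the proposed scheme is a plain gradient-type iteration and is easy to state. The sketch below shows only this inner update; the Bregman outer loop and the TV-regularized solve that define Bregman-LM-TV are summarized in the comment, not implemented.

    import numpy as np

    def landweber(A, y, steps=200, tau=None):
        """Basic Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k).
        Bregman-LM-TV wraps updates like this in an outer Bregman loop
        with a TV-regularized solve; that layer is omitted here."""
        if tau is None:
            tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            x = x + tau * A.T @ (y - A @ x)
        return x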
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
An ant colony optimization based algorithm for identifying gene regulatory elements.
Liu, Wei; Chen, Hanwu; Chen, Ling
2013-08-01
It is one of the most important tasks in bioinformatics to identify the regulatory elements in gene sequences. Most of the existing algorithms for identifying regulatory elements are inclined to converge into a local optimum, and have high time complexity. Ant Colony Optimization (ACO) is a meta-heuristic method based on swarm intelligence and is derived from a model inspired by the collective foraging behavior of real ants. Taking advantage of the ACO in traits such as self-organization and robustness, this paper designs and implements an ACO based algorithm named ACRI (ant-colony-regulatory-identification) for identifying all possible binding sites of transcription factor from the upstream of co-expressed genes. To accelerate the ants' searching process, a strategy of local optimization is presented to adjust the ants' start positions on the searched sequences. By exploiting the powerful optimization ability of ACO, the algorithm ACRI can not only improve precision of the results, but also achieve a very high speed. Experimental results on real world datasets show that ACRI can outperform other traditional algorithms in the respects of speed and quality of solutions. PMID:23746735
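A generic ACO skeleton for picking one candidate motif start per sequence shows the mechanics the paper builds on (pheromone-biased sampling, evaporation, reinforcement). It is not the ACRI algorithm: the local-optimization start-position strategy is omitted, and score is an assumed user-supplied quality function, for example the information content of the selected motif instances.

    import random

    def aco_motif_starts(seqs, motif_len, score, ants=20, iters=100, rho=0.1, Q=1.0):
        """Generic ACO skeleton: each ant picks a candidate motif start per
        sequence with probability proportional to pheromone; the best
        solution found so far reinforces its trail."""
        pher = [[1.0] * (len(s) - motif_len + 1) for s in seqs]
        best, best_score = None, float("-inf")
        for _ in range(iters):
            for _ in range(ants):
                sol = [random.choices(range(len(p)), weights=p)[0] for p in pher]
                q = score(seqs, sol, motif_len)
                if q > best_score:
                    best, best_score = sol, q
            for p in pher:                      # evaporation
                for i in range(len(p)):
                    p[i] *= 1.0 - rho
            for p, i in zip(pher, best):        # reinforce the best trail
                p[i] += Q
        return best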
The finite state projection algorithm for the solution of the chemical master equation.
Munsky, Brian; Khammash, Mustafa
2006-01-28
This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet prespecified tolerance in the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods. PMID:16460146
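The essence of the FSP is small enough to sketch: restrict the CME generator to a finite set of states, propagate with a matrix exponential, and read the leaked probability mass as the error certificate, expanding the projection until the certificate meets the tolerance. Here build_generator and build_p0 are assumed helpers producing the truncated generator and initial density for a given projection size.

    import numpy as np
    from scipy.linalg import expm

    def fsp_step(A, p0, t):
        """A is the CME generator restricted to a finite projection
        (column sums <= 0). Returns the truncated density at time t and
        the FSP certificate: the probability mass that leaked out."""
        p = expm(A * t) @ p0
        return p, 1.0 - p.sum()

    def fsp(build_generator, build_p0, t, tol, size=64):
        """Systematically expand the projection until the certificate
        meets the prespecified tolerance."""
        while True:
            p, err = fsp_step(build_generator(size), build_p0(size), t)
            if err <= tol:
                return p, err
            size *= 2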
A pruning-based disk scheduling algorithm for heterogeneous I/O workloads.
Kim, Taeseok; Bahn, Hyokyung; Won, Youjip
2014-01-01
In heterogeneous I/O workload environments, disk scheduling algorithms should support different QoS (Quality-of-Service) for each I/O request. For example, the algorithm should meet the deadlines of real-time requests and at the same time provide reasonable response time for best-effort requests. This paper presents a novel disk scheduling algorithm called G-SCAN (Grouping-SCAN) for handling heterogeneous I/O workloads. To find a schedule that satisfies the deadline constraints and seek time minimization simultaneously, G-SCAN maintains a series of candidate schedules and expands the schedules whenever a new request arrives. Maintaining these candidate schedules requires excessive spatial and temporal overhead, but G-SCAN reduces the overhead to a manageable level via pruning the state space using two heuristics. One is grouping that clusters adjacent best-effort requests into a single scheduling unit and the other is the branch-and-bound strategy that cuts off inefficient or impractical schedules. Experiments with various synthetic and real-world I/O workloads show that G-SCAN outperforms existing disk scheduling algorithms significantly in terms of the average response time, throughput, and QoS-guarantees for heterogeneous I/O workloads. We also show that the overhead of G-SCAN is reasonable for on-line execution. PMID:24782678
Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition.
Stallkamp, J; Schlipsing, M; Salmen, J; Igel, C
2012-08-01
Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But few systematic comparisons of such systems exist. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data, and the CNNs outperformed the human test persons. PMID:22394690
Technology Transfer Automated Retrieval System (TEKTRAN)
The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem
NASA Astrophysics Data System (ADS)
Jäger, Gerold; Zhang, Weixiong
The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.
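The AP half of the relationship the algorithm exploits can be sketched directly: solve the Assignment Problem over the arc costs; if the optimal assignment happens to be a single n-cycle, it is already a directed Hamiltonian cycle, and only otherwise is further search needed (the SAT machinery in the paper, omitted here).

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def ap_gives_hamiltonian_cycle(cost):
        """Solve the Assignment Problem relaxation of the DHCP.
        cost[i][j] = 0 if arc i->j exists, a large M otherwise (including
        the diagonal). Returns True iff the optimal assignment is one
        n-cycle, i.e. a Hamiltonian cycle."""
        n = len(cost)
        rows, cols = linear_sum_assignment(cost)
        succ = dict(zip(rows, cols))
        seen, v = set(), 0
        while v not in seen:        # follow the permutation from node 0
            seen.add(v)
            v = succ[v]
        return sorted(seen) == list(range(n))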
Wang, Shunfang; Liu, Shuhui
2015-01-01
An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and position specific scoring matrix (PSSM), are insufficient to represent a protein sequence due to their single perspectives. Thus, this paper proposes two fusion feature representations of DipPSSM and PseAAPSSM to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce balance factors to value the importance of its components. The optimal values of the balance factors are sought by genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find their important low-dimensional structure, which is essential for classification and location prediction. The numerical experiments on two public datasets with a KNN classifier and cross-validation tests showed that, in terms of the common indices of sensitivity, specificity, accuracy and MCC, the proposed fusion representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one. PMID:26703574
Production scheduling and rescheduling with genetic algorithms.
Bierwirth, C; Mattfeld, D C
1999-01-01
A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs. PMID:10199993
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, multiple constraints existing in real applications are less studied, especially when one task is subject to several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In the mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and punished in the cost function. Finally, with the TLBO seeking the global optimum, the variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
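The random-keys trick that bridges the continuous TLBO updates and the discrete sequencing problem is essentially one line: order tasks by their keys. A minimal sketch (precedence and the paper's other constraints are handled separately, in decoding and in the cost function):

    import numpy as np

    def decode_random_keys(keys):
        """Map a continuous vector (one key per task) to a task
        permutation: tasks are ordered by ascending key, so any
        real-valued TLBO update still decodes to a valid sequence."""
        return list(np.argsort(keys))

    # e.g. decode_random_keys([0.73, 0.11, 0.58]) -> [1, 2, 0]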
2015-01-01
Background: Mass Spectrometry (MS) is a ubiquitous analytical tool in biological research and is used to measure the mass-to-charge ratio of bio-molecules. Peak detection is the essential first step in MS data analysis. Precise estimation of peak parameters such as peak summit location and peak area are critical to identify underlying bio-molecules and to estimate their abundances accurately. We propose a new method to detect and quantify peaks in mass spectra. It uses dual-tree complex wavelet transformation along with Stein's unbiased risk estimator for spectra smoothing. Then, a new method, based on the modified Asymmetric Pseudo-Voigt (mAPV) model and hierarchical particle swarm optimization, is used for peak parameter estimation. Results: Using simulated data, we demonstrated the benefit of using the mAPV model over Gaussian, Lorentz and Bi-Gaussian functions for MS peak modelling. The proposed mAPV model achieved the best fitting accuracy for asymmetric peaks, with lower percentage errors in peak summit location estimation, which were 0.17% to 4.46% less than those of the other models. It also outperformed the other models in peak area estimation, delivering lower percentage errors, which were about 0.7% less than its closest competitor, the Bi-Gaussian model. In addition, using data generated from a MALDI-TOF computer model, we showed that the proposed overall algorithm outperformed the existing methods mainly in terms of sensitivity. It achieved a sensitivity of 85%, compared to 77% and 71% of the two benchmark algorithms, continuous wavelet transformation based method and Cromwell respectively. Conclusions: The proposed algorithm is particularly useful for peak detection and parameter estimation in MS data with overlapping peak distributions and asymmetric peaks. The algorithm is implemented using MATLAB and the source code is freely available at http://mapv.sourceforge.net. PMID:26680279
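The abstract does not give the mAPV formula, but a plain asymmetric pseudo-Voigt conveys the shape being fitted: a Lorentzian/Gaussian blend whose width differs on the two sides of the summit. The form below is an assumption for illustration only; the paper's modified variant adds further terms.

    import numpy as np

    def asym_pseudo_voigt(x, x0, amp, wl, wr, eta):
        """Illustrative asymmetric pseudo-Voigt: eta blends Lorentzian and
        Gaussian shapes; widths wl/wr differ left and right of the summit
        x0 (an assumed form, not the paper's exact mAPV)."""
        w = np.where(x < x0, wl, wr)
        gauss = np.exp(-4.0 * np.log(2.0) * ((x - x0) / w) ** 2)
        lorentz = 1.0 / (1.0 + 4.0 * ((x - x0) / w) ** 2)
        return amp * (eta * lorentz + (1.0 - eta) * gauss)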
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
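Two of the named ensemble rules are one-liners over the individual algorithms' outputs, sketched below: majority voting over greedy actions and Boltzmann multiplication over action probabilities. Any temperature scaling of the underlying Boltzmann policies is assumed to have happened upstream.

    import numpy as np

    def majority_vote(preferred_actions, n_actions):
        """Each RL algorithm casts one vote for its greedy action."""
        votes = np.bincount(preferred_actions, minlength=n_actions)
        return int(np.argmax(votes))

    def boltzmann_multiplication(policies):
        """Multiply the action probabilities of all algorithms, then
        renormalize; policies is a (n_algorithms, n_actions) array."""
        p = np.prod(policies, axis=0)
        return p / p.sum()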
Zarecki, Raphy; Oberhardt, Matthew A; Yizhak, Keren; Wagner, Allon; Shtifman Segal, Ella; Freilich, Shiri; Henry, Christopher S; Gophna, Uri; Ruppin, Eytan
2014-01-01
Growth rate has long been considered one of the most valuable phenotypes that can be measured in cells. Aside from being highly accessible and informative in laboratory cultures, maximal growth rate is often a prime determinant of cellular fitness, and predicting phenotypes that underlie fitness is key to both understanding and manipulating life. Despite this, current methods for predicting microbial fitness typically focus on yields [e.g., predictions of biomass yield using GEnome-scale metabolic Models (GEMs)] or notably require many empirical kinetic constants or substrate uptake rates, which render these methods ineffective in cases where fitness derives most directly from growth rate. Here we present a new method for predicting cellular growth rate, termed SUMEX, which does not require any empirical variables apart from a metabolic network (i.e., a GEM) and the growth medium. SUMEX is calculated by maximizing the SUM of molar EXchange fluxes (hence SUMEX) in a genome-scale metabolic model. SUMEX successfully predicts relative microbial growth rates across species, environments, and genetic conditions, outperforming traditional cellular objectives (most notably, the convention assuming biomass maximization). The success of SUMEX suggests that the ability of a cell to catabolize substrates and produce a strong proton gradient enables fast cell growth. Easily applicable heuristics for predicting growth rate, such as what we demonstrate with SUMEX, may contribute to numerous medical and biotechnological goals, ranging from the engineering of faster-growing industrial strains, modeling of mixed ecological communities, and the inhibition of cancer growth. PMID:24866123
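Operationally, SUMEX is an FBA-style linear program with the objective swapped: instead of maximizing biomass, maximize the sum of exchange fluxes. The sketch below states that LP with SciPy; the handling of molar units and any splitting of reversible exchange reactions are simplified away and should be treated as assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def sumex(S, lb, ub, exchange_idx):
        """Maximize the sum of exchange fluxes subject to steady state
        S v = 0 and flux bounds (a generic FBA-style LP; molar weighting
        from the paper is omitted)."""
        n = S.shape[1]
        c = np.zeros(n)
        c[exchange_idx] = -1.0          # linprog minimizes, so negate
        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=list(zip(lb, ub)), method="highs")
        return res.x, -res.fun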
Vaidya, Vishal S.; Ozer, Josef S.; Frank, Dieterle; Collings, Fitz B.; Ramirez, Victoria; Troth, Sean; Muniappa, Nagaraja; Thudium, Douglas; Gerhold, David; Holder, Daniel J.; Bobadilla, Norma A.; Marrer, Estelle; Perentes, Elias; Cordier, André; Vonderscher, Jacky; Maurer, Gérard; Goering, Peter L.; Sistare, Frank D.; Bonventre, Joseph V.
2010-01-01
Kidney toxicity accounts for a significant percentage of morbidity and drug candidate failure. Serum creatinine (SCr) and blood urea nitrogen (BUN) have been used to monitor kidney dysfunction for over a century but these markers are insensitive and non-specific. In multi-site preclinical rat toxicology studies the diagnostic performance of urinary kidney injury molecule-1 (Kim-1) was compared to traditional biomarkers as predictors of kidney tubular histopathologic changes, currently considered the “gold standard” of nephrotoxicity. In multiple models of kidney injury, urinary Kim-1 significantly outperformed SCr and BUN. The area under the receiver operating characteristic curve for Kim-1 was between 0.91 and 0.99 as compared to 0.79 to 0.9 for BUN and 0.73 to 0.85 for SCr. Thus urinary Kim-1 is the first injury biomarker of kidney toxicity qualified by the FDA and EMEA and is expected to significantly improve kidney safety monitoring. PMID:20458318
Hu, Jin; Wang, Guilin; Zhao, Wenguo; Liu, Xinyu; Zhang, Libin; Gao, Weiping
2016-07-01
Conjugating poly(ethylene glycol) (PEG) to therapeutic proteins, known as PEGylation, is widely used to improve their pharmacokinetics and therapeutic potential. One prime example is PEGylated interferon-alpha (PEGASYS). However, PEGylation usually yields a heterogeneous mixture of positional isomers with reduced bioactivity and low yield. Herein, we report site-specific in situ growth (SIG) of a PEG-like polymer, poly(oligo(ethylene glycol) methyl ether methacrylate) (POEGMA), from the C-terminus of interferon-alpha to form a site-specific (C-terminal) and stoichiometric (1:1) POEGMA conjugate of interferon-alpha in high yield. The POEGMA conjugate showed significantly improved pharmacokinetics, tumor accumulation, and anticancer efficacy compared to interferon-alpha. Notably, the POEGMA conjugate possessed a 7.2-fold higher in vitro antiproliferative bioactivity than PEGASYS. More importantly, in a murine cancer model, the POEGMA conjugate completely inhibited tumor growth and eradicated tumors in 75% of mice without appreciable systemic toxicity, whereas at the same dose no mice treated with PEGASYS survived beyond 58 days. The outperformance of a site-specific POEGMA conjugate prepared by SIG over PEGASYS, the current gold standard for interferon-alpha delivery, suggests that SIG is of interest for the development of next-generation protein therapeutics. PMID:27152679
Staiger, Christine; Cadot, Sidney; Györffy, Balázs; Wessels, Lodewyk F. A.; Klau, Gunnar W.
2013-01-01
Integrating gene expression data with secondary data such as pathway or protein-protein interaction data has been proposed as a promising approach for improved outcome prediction of cancer patients. Methods employing this approach usually aggregate the expression of genes into new composite features, while the secondary data guide this aggregation. Previous studies were limited to a few data sets with small numbers of patients. Moreover, each study used different data and evaluation procedures, making it difficult to objectively assess the gain in classification performance. Here we introduce the Amsterdam Classification Evaluation Suite (ACES). ACES is a Python package to objectively evaluate classification and feature-selection methods; it contains methods for pooling and normalizing Affymetrix microarrays from different studies. It is simple to use and therefore facilitates the comparison of new approaches to best-in-class approaches. In addition to the methods described in our earlier study (Staiger et al., 2012), we have included two prominent prognostic gene signatures specific for breast cancer outcome, one more composite feature-selection method, and two network-based gene ranking methods. Employing the evaluation pipeline, we show that current composite-feature classification methods do not outperform simple single-gene classifiers in predicting outcome in breast cancer. Furthermore, we find that the stability of features across different data sets is also not higher for composite features. Most strikingly, we observe that prediction performance is not affected when extracting features from randomized PPI networks. PMID:24391662
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
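Activity selection is the textbook case where a dominance relation justifies the greedy step: among all activities that could be scheduled next, the one finishing earliest dominates every overlapping alternative. A minimal sketch of that classic algorithm (illustrative only, not the paper's synthesis framework, which derives such relations mechanically):

```python
def select_activities(activities):
    """Pick a maximum set of pairwise-compatible (start, finish) intervals.
    Sorting by finish time makes the earliest-finishing activity dominant."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (5, 9), (6, 10)]))
# -> [(1, 4), (5, 7)]
```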
Pasquini, Sarah C; Wright, S Joseph; Santiago, Louis S
2015-07-01
always outperform trees, in terms of photosynthetic processes and under contrasting rates of resource supply of macronutrients, will allow lianas to increase in abundance if disturbance and tree turnover rates are increasing in Neotropical forests as has been suggested. PMID:26378309
2011-01-01
Background: To identify the determinants of invasiveness, comparisons of traits of invasive and native species are commonly performed. Invasiveness is generally linked to higher values of reproductive, physiological, and growth-related traits in the invasives relative to the natives in the introduced range. Phenotypic plasticity of these traits has also been cited as increasing the success of invasive species but has been little studied in invasive tree species. In a greenhouse experiment, we compared ecophysiological traits between a species invasive in Europe, Acer negundo, and early- and late-successional co-occurring native species, under different light, nutrient availability, and disturbance regimes. We also compared species of the same species groups in situ, in riparian forests. Results: Under non-limiting resources, A. negundo seedlings showed higher growth rates than the native species. However, A. negundo displayed equivalent or lower photosynthetic capacities and nitrogen content per unit leaf area compared to the native species; these findings were observed both in the seedlings in the greenhouse experiment and in adult trees in situ. These physiological traits were mostly conservative across the different light, nutrient, and disturbance environments. Overall, under non-limiting light and nutrient conditions, specific leaf area and total leaf area of A. negundo were substantially larger. The invasive species presented a higher plasticity in allocation to foliage, and therefore in growth, with increasing nutrient and light availability relative to the native species. Conclusions: The higher plasticity of the invasive species in foliage allocation in response to light and nutrient availability induced better growth in non-limiting resource environments. These results provide further insight into the invasiveness of A. negundo and suggest that such behaviour could explain its ability to outperform native tree species and contribute to its spread.
Williams, Paul H.; Vaissière, Bernard E.; Zhou, Zhiyong; Gai, Qinbao; Dong, Jie; An, Jiandong
2015-01-01
Peach Prunus persica (L.) Batsch is self-compatible and largely self-fertile, but under greenhouse conditions pollinators must be introduced to achieve good fruit set and quality. Because little work has been done to assess the effectiveness of different pollinators on peach trees under greenhouse conditions, we studied ‘Okubo’ peach in greenhouse tunnels near Beijing between 2012 and 2014. We measured pollen deposition, pollen-tube growth rates, ovary development, and initial fruit set after the flowers were visited by either of two managed pollinators: bumblebees, Bombus patagiatus Nylander, and honeybees, Apis mellifera L. The results show that B. patagiatus is more effective than A. mellifera as a pollinator of peach in greenhouses because of differences in two processes. First, B. patagiatus deposits more pollen grains on peach stigmas than A. mellifera, both during a single visit and during a whole day of open pollination. Second, there are differences in the fertilization performance of the pollen deposited. Half of the flowers visited by B. patagiatus are fertilized 9–11 days after bee visits, while for flowers visited by A. mellifera, half are fertilized 13–15 days after bee visits. Consequently, fruit development is also accelerated by bumblebees, showing that the different pollinators have not only different pollination efficiency, but also influence the subsequent time course of fertilization and fruit set. Flowers visited by B. patagiatus show faster ovary growth and ultimately these flowers produce more fruit. Our work shows that pollinators may influence fruit production beyond the amount of pollen delivered. We show that managed indigenous bumblebees significantly outperform introduced honeybees in increasing peach initial fruit set under greenhouse conditions. PMID:25799170
Allon, Aliza A.; Butcher, Kristin; Schneider, Richard A.; Lotz, Jeffrey C.
2011-01-01
Study Design: This study explores the use of bilaminar coculture pellets of mesenchymal stem cells (MSC) and nucleus pulposus cells (NPC) as a cell-based therapy for intervertebral disc regeneration. The pellets were tested under conditions that mimic the degenerative disc. Objective: Our goal is to optimize our cell-based therapy in vitro under conditions representative of the eventual diseased tissue. Summary of Background Data: Harnessing the potential of stem cells is an important strategy for regenerative medicine. Our approach seeks to direct the behavior of stem cells by mimicking embryonic processes underlying cartilage and intervertebral disc development. Prior experiments have shown that bilaminar co-culture can help differentiate MSC and substantially improve new matrix deposition. Methods: We have designed a novel spherical bilaminar cell pellet (BCP) in which MSC are enclosed in a shell of NPC. There were three groups: MSC, NPC, and BCP. The pellets were tested under three different culture conditions: in a bioreactor that provides pressure and hypoxia (mimicking normal disc conditions), with inflammatory cytokines (IL-1β and TNF-α), and in a bioreactor with inflammation (mimicking painful disc conditions). Results: When cultured in the bioreactor, the NPC pellets produced significantly more glycosaminoglycan (GAG)/cell than the other groups: 70-80% more than the BCP and MSC alone. When cultured in an inflammatory environment, the MSC and BCP groups produced 30-34% more GAG/cell than NPC (p<0.05). When the pellets were cultured in a bioreactor with inflammation, the BCP made 25% more GAG/cell than MSC and 57% more than NPC (p<0.05). Conclusion: This study shows that BCP outperform controls in a simulated degenerated disc environment. Adapting inductive mechanisms from development to trigger differentiation and restore diseased tissue has many advantages. As opposed to strategies that require growth factor supplements or genetic manipulations, our method is self
NIMEFI: Gene Regulatory Network Inference using Multiple Ensemble Feature Importance Algorithms
Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan
2014-01-01
One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network, in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene, and a high feature importance is considered putative evidence of a regulatory link between the two genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression, and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that it outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available.
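The GENIE3 decomposition that NIMEFI generalizes is compact enough to sketch. A minimal random-forest variant with scikit-learn (parameters illustrative; the real GENIE3 and NIMEFI implementations add normalization, subsampling, and rank-averaging across methods):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def genie3_style_scores(expr, n_trees=100, seed=0):
    """expr: samples x genes matrix. Returns scores[i, j], the importance
    of gene i when predicting target gene j; high values suggest an edge."""
    n_genes = expr.shape[1]
    scores = np.zeros((n_genes, n_genes))
    for j in range(n_genes):              # one regression problem per target gene
        predictors = np.delete(np.arange(n_genes), j)
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        rf.fit(expr[:, predictors], expr[:, j])
        scores[predictors, j] = rf.feature_importances_
    return scores
```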
Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato
2014-01-01
Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886
LBR: Load Balancing Routing Algorithm for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Daabaj, Khaled; Dixon, Mike; Koziniec, Terry
2010-06-01
Homogeneous wireless sensor networks (WSNs) are organized using identical sensor nodes, but the nature of WSN operations results in an imbalanced workload on gateway sensor nodes, which may lead to a hot-spot or routing hole problem. The routing hole problem can be considered a natural result of the tree-based routing schemes that are widely used in WSNs, where all nodes construct a multi-hop routing tree to a centralized root, e.g., a gateway or base station. For example, sensor nodes on the routing path and closer to the base station deplete their own energy faster than other nodes, and sensor nodes with the best link state to the base station are overloaded with traffic from the rest of the network and experience a faster energy depletion rate than their peers. Routing protocols for WSNs are reliability-oriented, and their use of a reliability metric to avoid unreliable links makes the load imbalance worse. However, none of these reliability-oriented routing protocols explicitly uses load balancing in its routing scheme. Since improving network lifetime is a fundamental challenge of WSNs, we present, in this chapter, a novel, energy-wise, load balancing routing (LBR) algorithm that addresses load balancing in an energy-efficient manner by maintaining a reliable set of parent nodes. This allows sensor nodes to quickly find a new parent upon parent loss due to node failure or an energy hole. The proposed routing algorithm is tested using simulations, and the results demonstrate that it outperforms the MultiHopLQI reliability-based routing algorithm.
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Focusing through a turbid medium by amplitude modulation with genetic algorithm
NASA Astrophysics Data System (ADS)
Dai, Weijia; Peng, Ligen; Shao, Xiaopeng
2014-05-01
Multiple scattering of light in opaque materials such as white paint and human tissue forms a volume speckle field, which greatly reduces imaging depth and degrades imaging quality. A novel approach is proposed to focus light through a turbid medium using amplitude modulation with a genetic algorithm (GA) from speckle patterns. Compared with phase modulation, the amplitude modulation approach, in which each element of the spatial light modulator (SLM) is either zero or one, is much easier to realize. Theoretical and experimental results show that the GA is better suited to low signal-to-noise ratio (SNR) environments than existing amplitude-control algorithms such as binary amplitude modulation. The circular Gaussian distribution model and Rayleigh-Sommerfeld diffraction theory are employed in our simulations to describe the turbid medium and the light propagation between optical devices, respectively. It is demonstrated that the GA technique achieves a higher overall enhancement, converges much faster than the others, and outperforms all the compared algorithms at high noise. Focusing through a turbid medium has potential for the observation of cells and protein molecules in biological tissues and other structures at the micro/nano scale.
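A toy version of the optimization loop conveys the idea. Here the medium is reduced to a random complex transmission vector and the fitness is the intensity at a single target point; the population size, crossover, and mutation rate are illustrative choices, not the paper's tuned GA:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                            # binary SLM elements
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # toy medium

def focus_intensity(mask):                        # field summed at the target point
    return float(np.abs(t @ mask) ** 2)

pop = rng.integers(0, 2, size=(30, N))            # population of on/off masks
for gen in range(200):
    fitness = np.array([focus_intensity(m) for m in pop])
    pop = pop[np.argsort(fitness)[::-1]]          # rank, best first
    children = []
    for _ in range(len(pop) // 2):
        p1, p2 = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]  # elite parents
        cut = int(rng.integers(1, N))
        child = np.concatenate([p1[:cut], p2[cut:]])  # one-point crossover
        child[rng.random(N) < 0.02] ^= 1          # bit-flip mutation
        children.append(child)
    pop[-len(children):] = children               # replace the worst half
print("best enhancement found:", focus_intensity(pop[0]))
```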
Does Unconscious Racism Exist?
ERIC Educational Resources Information Center
Quillian, Lincoln
2008-01-01
This essay argues for the existence of a form of unconscious racism. Research on implicit prejudice provides good evidence that most persons have deeply held negative associations with minority groups that can lead to subtle discrimination without conscious awareness. The evidence for implicit attitudes is briefly reviewed. Criticisms of the…
Understanding existing exposure situations.
Lecomte, J-F
2016-06-01
International Commission on Radiological Protection (ICRP) Publication 103 removed the distinction between practices and interventions, and introduced three types of exposure situation: existing, planned, and emergency. It also emphasised the optimisation principle in connection with individual dose restrictions for all controllable exposure situations. Existing exposure situations are those resulting from sources, natural or man-made, that already exist when a decision on control has to be taken. They have common features to be taken into account when implementing general recommendations, such as: the source may be difficult to control; all exposures cannot be anticipated; protective actions can only be implemented after characterisation of the exposure situation; time may be needed to reduce exposure below the reference level; levels of exposure are highly dependent on individual behaviour and present a wide spread of individual dose distribution; exposures at work may be adventitious and not considered as occupational exposure; there is generally no potential for accident; many stakeholders have to be involved; and many factors need to be considered. ICRP is currently developing a series of reports related to the practical implementation of Publication 103 to various existing exposure situations, including exposure from radon, exposure from cosmic radiation in aviation, exposure from processes using naturally occurring radioactive material, and exposure from contaminated sites due to past activities. PMID:26975365
Ju, Zhe; Gu, Hong
2016-08-15
As one important post-translational modification of prokaryotic proteins, pupylation plays a key role in regulating various biological processes. The accurate identification of pupylation sites is crucial for understanding the underlying mechanisms of pupylation. Although several computational methods have been developed for the identification of pupylation sites, their prediction accuracy is still unsatisfactory. Here, a novel bioinformatics tool named IMP-PUP is proposed to improve the prediction of pupylation sites. IMP-PUP is built on the composition of k-spaced amino acid pairs and trained with a modified semi-supervised self-training support vector machine (SVM) algorithm. The proposed algorithm iteratively trains a series of SVM classifiers on both annotated and non-annotated pupylated proteins. Computational results show that IMP-PUP achieves areas under the receiver operating characteristic curve of 0.91, 0.73, and 0.75 on our training set, Tung's testing set, and our testing set, respectively, which are better than those of the different-error-costs SVM algorithm and the original self-training SVM algorithm. Independent tests also show that IMP-PUP significantly outperforms three other existing pupylation site predictors: GPS-PUP, iPUP, and pbPUP. Therefore, IMP-PUP can be a useful tool for accurate prediction of pupylation sites. A MATLAB software package for IMP-PUP is available at https://juzhe1120.github.io/. PMID:27197054
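The self-training step can be illustrated with a generic loop: fit an SVM on the annotated sites, pseudo-label the unannotated ones it is most confident about, and refit. A bare-bones scikit-learn sketch (the confidence threshold and stopping rule are illustrative; IMP-PUP's actual algorithm is a modified version of this scheme):

```python
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X_lab, y_lab, X_unlab, rounds=10, thresh=0.9):
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf = SVC(probability=True).fit(X, y)
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= thresh        # confident pseudo-labels only
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]                        # shrink the unlabeled pool
    return SVC(probability=True).fit(X, y)
```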
FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm
Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; Zhang, Yuanyuan; Liu, Zhaowen
2016-01-01
Motivation: The two-locus model is a typical significant disease model to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and a preference for some types of disease models. Method: In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize a two-SNP locus as a candidate model; the two criteria are adopted simultaneously to improve identification power and to tackle the preference problem. A harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have a strong marginal effect. Finally, the G-test statistic is used to further test the candidate models. Results: We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have been developed recently based on swarm intelligence search algorithms. The results of the simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, number of evaluations, sensitivity (TPR), specificity (SPC), positive predictive value (PPV), and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset. PMID:27014873
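For readers unfamiliar with the metaheuristic, plain harmony search improvises one new "harmony" per iteration from memory, pitch adjustment, or random choice, and keeps it if it beats the worst stored solution. A generic minimization sketch (parameter values illustrative; FHSA-SED's improvements, such as the tabu table and dual scoring, are not shown):

```python
import numpy as np

def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    hm = rng.uniform(lo, hi, size=(hms, dim))     # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:               # draw from memory...
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:            # ...with pitch adjustment
                    new[d] += bw * (hi - lo) * rng.uniform(-1, 1)
            else:                                 # or sample at random
                new[d] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        c, worst = f(new), int(np.argmax(cost))
        if c < cost[worst]:                       # replace the worst harmony
            hm[worst], cost[worst] = new, c
    return hm[int(np.argmin(cost))], float(cost.min())

best_x, best_f = harmony_search(lambda x: float(np.sum(x ** 2)), dim=5, lo=-5, hi=5)
```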
Existence of hyperbolic calorons
Sibner, Lesley; Sibner, Robert; Yang, Yisong
2015-01-01
Recent work of Harland shows that the SO(3)-symmetric, dimensionally reduced, charge-N self-dual Yang–Mills calorons on the hyperbolic space H3×S1 may be obtained through constructing N-vortex solutions of an Abelian Higgs model as in the study of Witten on multiple instantons. In this paper, we establish the existence of such minimal action charge-N calorons by constructing arbitrarily prescribed N-vortex solutions of the Witten type equations.
A novel bit-quad-based Euler number computing algorithm.
Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao
2015-01-01
The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked per bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms. PMID:26636023
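For context, the classical bit-quad approach (Gray's formulas) scans every 2×2 window and combines three pattern counts; the paper's contribution is needing only two pattern counts and fewer pixel checks. A straightforward NumPy rendering of the classical formula for comparison:

```python
import numpy as np

def euler_number(img, connectivity=8):
    """Euler number of a binary image via classical bit-quad counting."""
    p = np.pad(img.astype(np.uint8), 1)           # pad so border quads are counted
    a, b = p[:-1, :-1], p[:-1, 1:]
    c, d = p[1:, :-1], p[1:, 1:]
    s = a + b + c + d                             # set pixels per 2x2 quad
    c1 = np.count_nonzero(s == 1)                 # quads with one foreground pixel
    c3 = np.count_nonzero(s == 3)                 # quads with three
    cd = np.count_nonzero((s == 2) & (a == d))    # the two diagonal patterns
    if connectivity == 8:
        return (c1 - c3 - 2 * cd) // 4
    return (c1 - c3 + 2 * cd) // 4                # 4-connectivity

print(euler_number(np.array([[1, 0], [0, 1]])))   # 1: one 8-connected component
```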
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
An efficient cuckoo search algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Ong, Pauline; Zainuddin, Zarita
2013-04-01
The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior can slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced, and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
Sequence-Specific Copolymer Compatibilizers designed via a Genetic Algorithm
NASA Astrophysics Data System (ADS)
Meenakshisundaram, Venkatesh; Patra, Tarak; Hung, Jui-Hsiang; Simmons, David
For several decades, block copolymers have been employed as surfactants to reduce interfacial energy for applications from emulsification to surface adhesion. While the simplest approach employs symmetric diblocks, studies have examined asymmetric diblocks, multiblock copolymers, gradient copolymers, and copolymer-grafted nanoparticles. However, there exists no established approach to determining the optimal copolymer compatibilizer sequence for a given application. Here we employ molecular dynamics simulations within a genetic algorithm to identify copolymer surfactant sequences yielding maximum reductions in the interfacial energy of model immiscible polymers. The optimal copolymer sequence depends significantly on surfactant concentration. Most surprisingly, at high surface concentrations, where the surfactant achieves the greatest interfacial energy reduction, specific non-periodic sequences are found to significantly outperform any regularly blocky sequence. This emergence of polymer sequence-specificity within a non-sequenced environment adds to a recent body of work suggesting that specific sequence may have the potential to play a greater role in polymer properties than previously understood. We acknowledge the W. M. Keck Foundation for financial support of this research.
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc, a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, or the barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
Robustness of Tree Extraction Algorithms from LIDAR
NASA Astrophysics Data System (ADS)
Dumitru, M.; Strimbu, B. M.
2015-12-01
Forest inventory faces a new era, as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms take as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed-based), while the second type simultaneously represents each tree crown as an individual entity and models its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation) and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
High-performance combinatorial algorithms
Pinar, Ali
2003-10-31
Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS), with several improved strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a multi-extremum, multi-parameter global optimization problem; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the widely used standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy values, proving it to be an effective way to predict the structure of proteins. PMID:25069136
NASA Astrophysics Data System (ADS)
Ubertini, Pietro; Sidoli, L.; Sguera, V.; Bazzano, A.
2009-12-01
Supergiant Fast X-ray Transients (SFXTs) are one of the most interesting (and unexpected) results of the INTEGRAL mission. They are a new class of HMXBs displaying short hard X-ray outbursts (duration less than a day) characterized by fast flares (few-hour timescale) and a large dynamic range (10^3-10^4). The physical mechanism driving their peculiar behaviour is still unclear and highly debated: some models involve the structure of the supergiant companion donor wind (likely clumpy, in a spherical or non-spherical geometry) and the orbital properties (wide separation with an eccentric or circular orbit), while others involve the properties of the neutron star compact object and invoke very low magnetic field values (B < 10^10 G) or, alternatively, very high ones (B > 10^14 G, magnetars). The picture is also still highly unclear from the observational point of view: no cyclotron lines have been detected in the spectra, so the strength of the neutron star magnetic field is unknown. Orbital periods have been measured in only 4 systems, spanning from 3.3 days to 165 days. Even the duty cycle seems to be quite different from source to source. The Energetic X-ray Imaging Survey Telescope (EXIST), with its hard X-ray all-sky survey and greatly improved limiting sensitivity, will allow us to get a clearer picture of SFXTs. A complete census of their number is essential to enlarge the sample. Long-term and as continuous as possible X-ray monitoring is crucial to (1) obtain the duty cycle, (2) investigate their unknown orbital properties (separation, orbital period, eccentricity), (3) completely cover the whole outburst activity, and (4) search for cyclotron lines in the high-energy spectra. EXIST observations will provide crucial information to test the different models and shed light on the peculiar behaviour of SFXTs.
Fast ordering algorithm for exact histogram specification.
Nikolova, Mila; Steidl, Gabriele
2014-12-01
This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any application based on exact histogram specification. Our algorithm relies on an ordering procedure based on a specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed-point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than the alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms its main competitors by a wide margin. PMID:25347881
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Joint optimization of algorithmic suites for EEG analysis.
Santana, Eder; Brockmeier, Austin J; Principe, Jose C
2014-01-01
Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable. PMID:25570621
Kwarciak, Kamil; Radom, Marcin; Formanowicz, Piotr
2016-04-01
Classical sequencing by hybridization takes into account only binary information about sequence composition: a given element from an oligonucleotide library either is or is not a part of the target sequence. However, DNA chip technology has developed to the point where it can provide partial information about the multiplicity of each oligonucleotide the analyzed sequence consists of. Exact data of this type cannot currently be obtained, but even partial information should be very useful. Two realistic multiplicity-information models are considered in this paper. The first, called "one and many", assumes that it is possible to learn whether a given oligonucleotide occurs in a reconstructed sequence once or more than once. In the second model, called "one, two and many", the biochemical experiment reveals whether a given oligonucleotide is present in an analyzed sequence once, twice, or at least three times. An ant colony optimization algorithm has been implemented to verify these models and to compare them with existing algorithms for sequencing by hybridization that utilize the additional information. The proposed algorithm solves the problem in the presence of any kind of hybridization errors. Computational experiments confirm that using even partial multiplicity information increases the quality of reconstructed sequences. Moreover, they show that the more precise model yields better solutions and that the ant colony optimization algorithm outperforms the existing ones. Test data sets and the proposed ant colony optimization algorithm are available at: http://bioserver.cs.put.poznan.pl/download/ACO4mSBH.zip. PMID:26878124
A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems
NASA Astrophysics Data System (ADS)
Thammano, Arit; Teekeng, Wannaporn
2015-05-01
The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
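The standard roulette wheel operator that the paper fuzzifies is a one-liner on normalized fitness. A minimal version (for job-shop, where lower makespan is better, a reciprocal fitness is a common choice; the paper's fuzzy membership weighting is not reproduced here):

```python
import numpy as np

def roulette_wheel(makespans, rng=np.random.default_rng()):
    """Select one chromosome index with probability proportional to fitness;
    fitness here is 1/makespan so shorter schedules are favored."""
    fitness = 1.0 / np.asarray(makespans, dtype=float)
    return int(rng.choice(len(fitness), p=fitness / fitness.sum()))
```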
Exploration of new multivariate spectral calibration algorithms.
Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.
2004-03-01
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We demonstrated the ability of hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels, or with spectral errors correlated between frequency channels, ACLS methods generally outperformed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and ease of use makes the new ACLS methods the preferred algorithms for multivariate spectral calibrations.
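The CLS half of such hybrids is easy to state: calibration spectra are modeled as A = C K, the pure-component spectra K are estimated by least squares from known concentrations C, and new concentrations are recovered by regressing new spectra onto K. A self-contained NumPy sketch on simulated data (all matrices are illustrative; the report's ACLS augmentation and factor selection are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
K_true = rng.random((2, 50))                      # pure-component spectra
C_cal = rng.random((20, 2))                       # known calibration concentrations
A_cal = C_cal @ K_true + 0.01 * rng.normal(size=(20, 50))

K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)       # calibration: A = C K
A_new = np.array([[0.3, 0.7]]) @ K_true                      # "unknown" sample
C_pred, *_ = np.linalg.lstsq(K_hat.T, A_new.T, rcond=None)   # prediction step
print(C_pred.ravel())                             # close to [0.3, 0.7]
```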
A memetic algorithm for enhancing the robustness of scale-free networks against malicious attacks
NASA Astrophysics Data System (ADS)
Zhou, Mingxing; Liu, Jing
2014-09-01
The robustness of the infrastructure of various real-life systems, which can be represented by networks and manifest the scale-free property, is of great importance. Thus, in this paper, a new memetic algorithm (MA), a type of effective optimization method combining global and local search, is proposed to enhance the robustness of scale-free (RSF) networks against malicious attacks (MA) without changing the degree distribution; the proposed algorithm is abbreviated as MA-RSFMA. In particular, with the intrinsic properties of the network-structure optimization problem in mind, a crossover operator that performs global search and a local search operator are designed. In the experiments, both synthetic scale-free networks and real-world networks, such as the EU power grid network and the real Internet at the autonomous system (AS) level, are used. MA-RSFMA shows a strong ability to search for the most robust network structure and clearly outperforms existing local search methods.
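The local-search half of such an algorithm can be sketched with degree-preserving edge swaps scored by the Schneider et al. robustness measure R. A small NetworkX hill-climbing illustration (the population and crossover of a full memetic algorithm are omitted; sizes and iteration counts are illustrative):

```python
import networkx as nx

def robustness_R(G):
    """Mean largest-component fraction while deleting nodes in
    decreasing-degree order (a simple malicious-attack model)."""
    H, n, total = G.copy(), G.number_of_nodes(), 0.0
    for _ in range(n - 1):
        v = max(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(v)
        total += len(max(nx.connected_components(H), key=len)) / n
    return total / n

G = nx.barabasi_albert_graph(60, 2, seed=1)
best_R = robustness_R(G)
for _ in range(100):                              # degree-preserving local search
    trial = G.copy()
    try:
        nx.double_edge_swap(trial, nswap=1, max_tries=50)
    except nx.NetworkXException:
        continue
    r = robustness_R(trial)
    if r > best_R:
        G, best_R = trial, r
print("optimized R:", round(best_R, 3))
```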
A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles.
Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique
2015-01-01
The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n² columns, n² rows, and n² subgrids, each one containing distinct integers from 1 to n². The puzzle belongs to the NP-complete collection of problems, for which diverse exact and approximate solution methods exist. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus have. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrid and approximate methods. PMID:26078751
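The division of labor is: alldifferent filtering (from CP) prunes candidate values, while tabu search explores complete assignments. Under the common encoding where each subgrid is kept as a permutation of 1..9, the search only needs a row/column conflict count and a swap move inside a subgrid; a minimal sketch of those two ingredients (the tabu list, filtering, and restarts of the actual hybrid are not shown):

```python
import random

def conflicts(grid):
    """Duplicate count over all rows and columns of a 9x9 grid."""
    c = 0
    for i in range(9):
        c += 9 - len({grid[i][j] for j in range(9)})   # row i
        c += 9 - len({grid[j][i] for j in range(9)})   # column i
    return c

def neighbor(grid, free_cells):
    """Swap two non-clue cells within one subgrid, preserving the
    subgrid's alldifferent property by construction."""
    g = [row[:] for row in grid]
    box = random.randrange(9)
    cells = [(r, c) for (r, c) in free_cells
             if (r // 3) * 3 + c // 3 == box]
    if len(cells) >= 2:
        (r1, c1), (r2, c2) = random.sample(cells, 2)
        g[r1][c1], g[r2][c2] = g[r2][c2], g[r1][c1]
    return g
```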
Tumor stratification by a novel graph-regularized bi-clique finding algorithm.
Ahmadi Adl, Amin; Qian, Xiaoning
2015-08-01
Due to involved disease mechanisms, many complex diseases such as cancer, demonstrate significant heterogeneity with varying behaviors, including different survival time, treatment responses, and recurrence rates. The aim of tumor stratification is to identify disease subtypes, which is an important first step towards precision medicine. Recent advances in profiling a large number of molecular variables such as in The Cancer Genome Atlas (TCGA), have enabled researchers to implement computational methods, including traditional clustering and bi-clustering algorithms, to systematically analyze high-throughput molecular measurements to identify tumor subtypes as well as their corresponding associated biomarkers. In this study we discuss critical issues and challenges in existing computational approaches for tumor stratification. We show that the problem can be formulated as finding densely connected sub-graphs (bi-cliques) in a bipartite graph representation of genomic data. We propose a novel algorithm that takes advantage of prior biology knowledge through a gene-gene interaction network to find such sub-graphs, which helps simultaneously identify both tumor subtypes and their corresponding genetic markers. Our experimental results show that our proposed method outperforms current state-of-the-art methods for tumor stratification. PMID:25791318
A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.
Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst
2015-01-01
Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares with an l1 regularization norm. The method is tested by reconstructing an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT, and regularized least squares (RLS). The method demonstrates significant resolution improvement compared with B-scan (about 91% using real data). The proposed scheme also outperforms the traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
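The l1-regularized least-squares problem at the heart of such methods, min over x of 0.5*||Ax - y||^2 + lam*||x||_1, is commonly solved by iterative soft-thresholding. A compact sketch of that generic solver (ISTA), not the authors' exact implementation:

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz const of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))        # gradient step on the LS term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x
```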
Feature Selection via Modified Gravitational Optimization Algorithm
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel
2015-03-01
Feature selection is the process of selecting a subset of relevant and maximally informative features, which efficiently represents the input data. We propose a feature selection algorithm based on an n-dimensional gravitational optimization algorithm (NGOA), which relies on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables called masses, which are defined based on the extracted features. The forces between the masses, as well as their new locations, are calculated using the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods to FLAIR and T1-weighted MR brain images, with two classes: normal and abnormal tissue. The extracted features are divided into groups of five. The best feature in each group is selected using the n-dimensional gravitational optimization algorithm and a support vector machine classifier; the selected features then form new groups of five, and the process repeats until the desired number of features is selected. An advantage of the NGOA algorithm is that the possibility of being drawn into a locally optimal solution is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real and simulated brain tumor data.
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and closely related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model is very useful for helping managers make the right decisions in an e-supply chain environment. PMID:24489489
Wright, April M.; Hillis, David M.
2014-01-01
Despite the introduction of likelihood-based methods for estimating phylogenetic trees from phenotypic data, parsimony remains the most widely used optimality criterion for building trees from discrete morphological data. However, it has been known for decades that there are regions of solution space in which parsimony is a poor estimator of tree topology. Numerous software implementations of likelihood-based models for the estimation of phylogeny from discrete morphological data exist, especially for the Mk model of discrete character evolution. Here we explore the efficacy of Bayesian estimation of phylogeny using the Mk model under conditions that are commonly encountered in paleontological studies. Using simulated data, we describe the relative performance of parsimony and the Mk model under a range of realistic conditions that include common scenarios of missing data and rate heterogeneity. PMID:25279853
A hybrid artificial bee colony algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Alqattan, Zakaria N.; Abdullah, Rosni
2015-02-01
The Artificial Bee Colony (ABC) algorithm is a swarm intelligence algorithm introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC's local search process and its bee movement (solution improvement) equation still have some weaknesses. The ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
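For orientation, the sketch below contrasts the standard ABC single-dimension move with a PSO-style particle move of the kind HPABC adapts. It is a hedged illustration: the paper's exact update equation is not given in the abstract, so the coefficients w, c1, and c2 are conventional PSO values, not HPABC's.

    import numpy as np
    rng = np.random.default_rng(0)

    def abc_candidate(x, x_k):
        """Standard ABC move: perturb one dimension toward a randomly
        chosen neighbour solution x_k."""
        j = rng.integers(x.size)
        phi = rng.uniform(-1.0, 1.0)
        cand = x.copy()
        cand[j] = x[j] + phi * (x[j] - x_k[j])
        return cand

    def pso_style_candidate(x, velocity, personal_best, global_best,
                            w=0.7, c1=1.5, c2=1.5):
        """PSO-inspired move: pull the full solution vector toward its
        personal best and the global best (illustrative coefficients)."""
        r1, r2 = rng.random(x.size), rng.random(x.size)
        new_v = (w * velocity
                 + c1 * r1 * (personal_best - x)
                 + c2 * r2 * (global_best - x))
        return x + new_v, new_v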
Kernel simplex growing algorithm for hyperspectral endmember extraction
NASA Astrophysics Data System (ADS)
Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao
2014-01-01
In order to effectively extract endmembers from hyperspectral imagery, where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scattering effects can be ignored. To avoid determining a complex nonlinear mapping, a kernel function is used to extend the NSGA to the kernel NSGA (KNSGA). Experimental results on simulated and real data show that the proposed KNSGA approach outperforms SGA and NSGA.
Algorithms and Requirements for Measuring Network Bandwidth
Jin, Guojun
2002-12-08
This report unveils new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thereby estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and it addresses the conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The report also discusses a number of important terms and issues in network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
NASA Astrophysics Data System (ADS)
Suliman, Suha Ibrahim
Landsat 7's Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC), the device that compensates for the satellite's motion, failed in May 2003, resulting in a loss of about 22% of the data. To improve the reconstruction of Landsat 7 SLC-off images, a Locally Linear Manifold (LLM) model is proposed for filling gaps in hyperspectral imagery. In this approach, each spectral band is modeled as a non-linear, locally affine manifold that can be learned from the matching bands at different time instances. Moreover, each band is divided into small overlapping spatial patches. In particular, each patch is considered to be a linear combination (approximately on an affine space) of a set of corresponding patches from the same location that are adjacent in time or from the same season of the year. Fill patches are selected from Landsat 5 Thematic Mapper (TM) products from 1984 through 2011, which have spatial and radiometric resolution similar to Landsat 7 products. Using this approach, the gap-filling process involves finding a feasible point on the learned manifold to approximate the missing pixels. The proposed LLM framework is compared to existing single-source (Average and Inverse Distance Weight (IDW)) and multi-source (Local Linear Histogram Matching (LLHM) and Adaptive Window Linear Histogram Matching (AWLHM)) gap-filling methodologies. We analyze the effectiveness of the proposed LLM approach through simulation examples with known ground truth. It is shown that the LLM-model-driven approach outperforms all existing recovery methods considered in this study. The superiority of LLM is illustrated by better reconstructed images with higher accuracy, even over heterogeneous landscapes. Moreover, it is relatively simple to realize algorithmically, and it needs much less computing time than the state-of-the-art AWLHM approach.
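The patch-wise reconstruction step lends itself to a compact sketch: fit combination weights on the pixels a gap patch still has, then predict its missing pixels from co-located reference patches. This is a hedged illustration under simplifying assumptions (plain least squares with a normalization toward affine weights, no manifold learning), and all names are hypothetical.

    import numpy as np

    def fill_patch(target, refs, valid):
        """target: (p,) patch with gaps; refs: (k, p) co-located patches
        from other dates; valid: boolean mask of non-gap pixels."""
        A = refs[:, valid].T                      # (n_valid, k) design matrix
        w, *_ = np.linalg.lstsq(A, target[valid], rcond=None)
        w = w / w.sum()                           # push toward affine weights
                                                  # (assumes the sum is not ~0)
        filled = target.copy()
        filled[~valid] = refs[:, ~valid].T @ w    # predict the missing pixels
        return filled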
A scalable and practical one-pass clustering algorithm for recommender system
NASA Astrophysics Data System (ADS)
Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali
2015-12-01
KMeans clustering-based recommendation algorithms have been proposed to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
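A minimal single-pass, threshold-based clusterer of the general kind described above might look as follows. The actual One-Pass rule is not spelled out in the abstract, so the nearest-centroid-within-threshold rule and the incremental centroid update here are assumptions.

    import numpy as np

    def one_pass_cluster(points, threshold):
        centroids, counts, labels = [], [], []
        for x in points:
            if centroids:
                d = [np.linalg.norm(x - c) for c in centroids]
                j = int(np.argmin(d))
            if not centroids or d[j] > threshold:
                centroids.append(x.astype(float))   # open a new cluster
                counts.append(1)
                labels.append(len(centroids) - 1)
            else:
                counts[j] += 1                      # fold the point in online:
                centroids[j] += (x - centroids[j]) / counts[j]
                labels.append(j)
        return labels, centroids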
BoCluSt: Bootstrap Clustering Stability Algorithm for Community Detection
Garcia, Carlos
2016-01-01
The identification of modules or communities in sets of related variables is a key step in the analysis and modeling of biological systems. Procedures for this identification are usually designed to allow fast analyses of very large datasets and may produce suboptimal results when these sets are of a small to moderate size. This article introduces BoCluSt, a new, somewhat more computationally intensive, community detection procedure that is based on combining a clustering algorithm with a measure of stability under bootstrap resampling. Both computer simulation and analyses of experimental data showed that BoCluSt can outperform current procedures in the identification of multiple modules in data sets with a moderate number of variables. In addition, the procedure provides users with a null distribution of results to evaluate the support for the existence of community structure in the data. BoCluSt takes individual measures for a set of variables as input, and may be a valuable and robust exploratory tool of network analysis, as it provides 1) an estimation of the best partition of variables into modules, 2) a measure of the support for the existence of modular structures, and 3) an overall description of the whole structure, which may reveal hierarchical modular situations, in which modules are composed of smaller sub-modules. PMID:27258041
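The combination of clustering with bootstrap stability can be sketched briefly: cluster many bootstrap resamples and record how often each pair of variables is co-assigned to the same module. The clustering method (average linkage on correlation distance) and the resample count below are assumptions, not BoCluSt's exact procedure.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def co_assignment_stability(data, k, n_boot=200, seed=0):
        """data: (n_samples, n_vars); returns (n_vars, n_vars) matrix of
        co-assignment rates under bootstrap resampling of the samples."""
        rng = np.random.default_rng(seed)
        n, p = data.shape
        co = np.zeros((p, p))
        for _ in range(n_boot):
            idx = rng.integers(n, size=n)             # bootstrap resample
            dist = 1 - np.corrcoef(data[idx].T)       # correlation distance
            np.fill_diagonal(dist, 0.0)
            lab = fcluster(linkage(squareform(dist, checks=False), 'average'),
                           k, criterion='maxclust')
            co += (lab[:, None] == lab[None, :])      # same-module indicator
        return co / n_boot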
Efficient Record Linkage Algorithms Using Complete Linkage Clustering
Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar
2016-01-01
Different agencies often hold records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. Many of the available record linkage algorithms are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604
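Two of the supporting techniques named above, sort-based duplicate elimination and blocking, can be sketched directly. The blocking key and the field names below are illustrative choices, not the paper's.

    def block_key(rec):
        # e.g., first letter of last name + year of birth (illustrative)
        return (rec["last"][:1].upper(), rec["dob"][:4])

    def dedup_and_block(records):
        # sorting brings identical copies next to each other
        records = sorted(records, key=lambda r: (r["last"], r["first"], r["dob"]))
        unique = [r for i, r in enumerate(records)
                  if i == 0 or r != records[i - 1]]
        blocks = {}
        for r in unique:
            blocks.setdefault(block_key(r), []).append(r)
        return blocks   # compare record pairs only within each block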
Maktabdar Oghaz, Mahdi; Maarof, Mohd Aizaini; Zainal, Anazida; Rohani, Mohd Foad; Yaghoubyan, S Hadi
2015-01-01
Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance, one which can address issues like illumination variations, various camera characteristics and diversity in skin color tones, has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN, obtained by employing a Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic is used to find the optimal color component combination in terms of skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution to a lower-complexity dimension. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that with the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, outperforming the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel-wise skin detection applications. PMID:26267377
Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G
2015-11-01
calibration to external validation methods, and in moving from PLS and MPLS to Bayesian methods, particularly Bayes A and Bayes B. The maximum R² value of validation was obtained with Bayes B and Bayes A. For the FA, C10:0 (% of each FA on a total-FA basis) had the highest R² (0.75, achieved with Bayes A and Bayes B), and among the technological traits, fresh cheese yield had the highest R² (0.82, achieved with Bayes B). These two methods have proven to be useful instruments for shrinking and selecting very informative wavelengths and for inferring the structure and functions of the analyzed traits. We conclude that Bayesian models are powerful tools for deriving calibration equations and, importantly, these equations can be easily developed using existing open-source software. As part of our study, we provide scripts based on the open-source R software BGLR, which can be used to train customized prediction equations for other traits or populations. PMID:26387015
A new frame-based registration algorithm.
Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be composed of straight rods, as opposed to the N structures or the accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mismatch. It allows greater flexibility in the frame structure. Lastly, it reduces the frame construction cost, as adherence to a concise model is not required. PMID:9472834
Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel
Akbari, Mohsen; Manesh, Mohsen Riahi
2014-01-01
In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades system performance. In this paper, an imperialist competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely particle swarm optimization (PSO) and the genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that the SNR is maximized and the bit error rate (BER) is minimized. The results indicate that the proposed method eliminates the need for channel estimation and can outperform conventional diversity combining methods. PMID:25045725
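As a hedged illustration of evolutionary weight tuning for diversity combining, the sketch below uses a simple (1+1) evolution strategy to adjust complex combiner weights so that errors on known pilot bits are minimized. The pilot-based fitness and all parameters are assumptions standing in for the paper's ICA, PSO, and GA optimizers.

    import numpy as np
    rng = np.random.default_rng(1)

    def evolve_weights(r_pilot, bits, n_iter=500, sigma=0.1):
        """r_pilot: (n_branches, n_pilots) received pilot samples;
        bits: (n_pilots,) transmitted BPSK pilot bits (+/-1)."""
        L = r_pilot.shape[0]

        def ber(w):
            s = np.real(np.conj(w) @ r_pilot)     # combine the branches
            return np.mean(np.sign(s) != bits)

        w = rng.standard_normal(L) + 1j * rng.standard_normal(L)
        best = ber(w)
        for _ in range(n_iter):                   # (1+1)-ES: mutate, keep if no worse
            cand = w + sigma * (rng.standard_normal(L)
                                + 1j * rng.standard_normal(L))
            if (b := ber(cand)) <= best:
                w, best = cand, b
        return w / np.linalg.norm(w), best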
Alternating minimization algorithm for speckle reduction with a shifting technique.
Woo, Hyenkyun; Yun, Sangwoon
2012-04-01
Speckle (multiplicative noise) in synthetic aperture radar (SAR) imagery makes it difficult to interpret the observed image. Due to the edge-preserving property of total variation (TV), variational models with TV regularization have attracted much interest for speckle reduction. Algorithms based on the augmented Lagrangian function have been proposed to efficiently solve speckle-reduction variational models with TV regularization. However, these algorithms require inner iterations or inverses involving the Laplacian operator at each iteration. In this paper, we adapt Tseng's alternating minimization algorithm with a shifting technique to remove speckle efficiently, without any inner iterations or inverses involving the Laplacian operator. The proposed method is very simple and highly parallelizable, and it is therefore well suited to despeckling very large SAR images. Numerical results show that our proposed method outperforms the state-of-the-art algorithms for speckle-reduction variational models with a TV regularizer in terms of central-processing-unit time. PMID:22106149
Linear antenna array optimization using flower pollination algorithm.
Saxena, Prerna; Kothari, Ashwin
2016-01-01
The flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays to obtain optimized antenna positions that achieve an array pattern with minimum side lobe level along with the placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and the results are validated by benchmarking against results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases FPA outperforms the other evolutionary algorithms, and at times it yields a similar performance. PMID:27066339
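The core FPA update rules are standard (Yang's global pollination via Lévy flights with a switch probability p, and local pollination between two random flowers) and are sketched below. The antenna-array objective itself is left abstract, and the greedy selection assumes a minimization problem.

    import numpy as np
    from math import gamma
    rng = np.random.default_rng(2)

    def levy(size, beta=1.5):
        # Mantegna's algorithm for Levy-stable step lengths
        num = gamma(1 + beta) * np.sin(np.pi * beta / 2)
        den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma_u = (num / den) ** (1 / beta)
        u = rng.normal(0, sigma_u, size)
        v = rng.normal(0, 1, size)
        return u / np.abs(v) ** (1 / beta)

    def fpa_step(pop, objective, best, p=0.8):
        new = pop.copy()
        n, d = pop.shape
        for i in range(n):
            if rng.random() < p:                  # global pollination
                new[i] = pop[i] + levy(d) * (best - pop[i])
            else:                                 # local pollination
                j, k = rng.choice(n, 2, replace=False)
                new[i] = pop[i] + rng.random() * (pop[j] - pop[k])
        for i in range(n):                        # keep only improving moves
            if objective(new[i]) >= objective(pop[i]):
                new[i] = pop[i]
        return new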
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
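As a brief usage note, the FLANN library mentioned above is bundled with OpenCV. The snippet below builds a randomized k-d forest index to match SIFT descriptors and applies Lowe's ratio test; the image paths are placeholders.

    import cv2

    img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)

    FLANN_INDEX_KDTREE = 1                       # randomized k-d forest
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)              # leaves to visit: speed/accuracy knob
    matcher = cv2.FlannBasedMatcher(index_params, search_params)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]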
A new improved artificial bee colony algorithm for ship hull form optimization
NASA Astrophysics Data System (ADS)
Huang, Fuxin; Wang, Lijue; Yang, Chi
2016-04-01
The artificial bee colony (ABC) algorithm is a relatively new swarm-intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it suffers from slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of an elite solution pool and a block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated on a set of well-known numerical benchmark functions. It is then applied to optimize two ship hull forms for minimum resistance. The test results show that the proposed improved ABC algorithm can outperform the ABC algorithm on most of the tested problems.
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm that optimizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect but introduces decoherence into the subsequent interferometric processing. The algorithm proposed in this paper instead processes the SAR echoes with consistent imaging parameters during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446
An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors
Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel
2016-01-01
Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime and achieving self-sustainability. This paper proposes an energy-aware Adaptive Sampling Algorithm (ASA) for WSN with power-hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power-hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy-aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (from both wind and solar sources) and power-hungry sensors (an ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy-aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
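The energy-aware idea reduces to scaling the sampling interval with the node's energy state, as in the hedged sketch below. The thresholds and the scaling rule are illustrative assumptions, not the exact EASA policy.

    def next_interval(base_interval_s, battery_level, harvest_rate_w, load_w,
                      low=0.3, high=0.8):
        """battery_level in [0, 1]; returns the next sampling interval."""
        net = harvest_rate_w - load_w             # net energy flow
        if battery_level >= high and net >= 0:
            return base_interval_s                # energy-rich: full rate
        if battery_level <= low and net < 0:
            return base_interval_s * 8            # survival mode: back off hard
        return base_interval_s * (2 - battery_level)  # smooth in-between scaling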
Is there a best hyperspectral detection algorithm?
NASA Astrophysics Data System (ADS)
Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.
2009-05-01
A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions does not necessarily translate to superiority in real-world applications.
Macpherson, Peter; Lalloo, David G; Thindwa, Deus; Webb, Emily L; Squire, S Bertel; Chipungu, Geoffrey A; Desmond, Nicola; Makombe, Simon D; Taegtmeyer, Miriam; Choko, Augustine T; Corbett, Elizabeth L
2014-02-01
The accuracy of a novel community health worker antiretroviral therapy eligibility assessment tool was examined in community members in Blantyre, Malawi. Nurses independently performed World Health Organization (WHO) staging and CD4 counts. One hundred ten (55.6%) of 198 HIV-positive participants had a CD4 count of <350 cells per cubic millimeter. The community health worker tool significantly outperformed WHO clinical staging in identifying CD4 count of <350 cells per cubic millimeter in terms of sensitivity (41% vs. 19%), positive predictive value (75% vs. 68%), negative predictive values (53% vs. 47%), and area under the receiver-operator curve (0.62 vs. 0.54; P = 0.017). Reliance on WHO staging is likely to result in missed and delayed antiretroviral therapy initiation. PMID:23846567
Wang, San-Yuan; Kuo, Ching-Hua; Tseng, Yufeng J
2015-03-01
Able to detect known and unknown metabolites, untargeted metabolomics has shown great potential in identifying novel biomarkers. However, elucidating all possible liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) ion signals in a complex biological sample remains challenging, since many ions are not the products of metabolites. Methods for reducing ions not related to metabolites, or for directly detecting metabolite-related (pure) ions, are therefore important. In this work, we describe PITracer, a novel algorithm that accurately detects the pure ions of an LC/TOF-MS profile to extract pure ion chromatograms and detect chromatographic peaks. PITracer estimates the relative mass difference tolerance of ions and calibrates the mass-over-charge (m/z) values for the peak detection algorithms, with an additional option for further mass correction with respect to a user-specified metabolite. PITracer was evaluated using two data sets, one containing 373 human metabolite standards, including 5 saturated standards prone to split peaks caused by large m/z fluctuations, and one containing 12 urine samples spiked with 50 forensic drugs of varying concentrations. Analyses of these data sets show that PITracer outperformed an existing state-of-the-art algorithm, extracted the pure ion chromatograms of the 5 saturated standards without generating split peaks, and detected the forensic drugs with high recall, precision, and F-score and small mass error. PMID:25622715
NASA Astrophysics Data System (ADS)
Hu, Yifan; Ding, Yongsheng; Hao, Kuangrong; Ren, Lihong; Han, Hua
2014-03-01
The growth of mobile handheld devices promotes sink mobility in an increasing number of wireless sensor network (WSN) applications. The movement of the sink may break existing routes in a WSN, so routing recovery is a critical challenge. In order to maintain an available route from each source node to the sink, we propose an immune orthogonal learning particle swarm optimisation algorithm (IOLPSOA) that provides fast routing recovery after path failures caused by sink movement and constructs efficient alternative paths to repair the routes. In the algorithm's efficient bio-heuristic routing recovery mechanism, the orthogonal learning strategy guides particles to fly in better directions by constructing a promising and efficient exemplar, while the immune mechanism maintains the diversity of the particles. We discuss the implementation of the IOLPSOA-based routing protocol and present a performance evaluation through several simulation experiments. The results demonstrate that the IOLPSOA-based protocol outperforms the other three protocols: it can efficiently repair the routing topology changed by sink movement, reduce the communication overhead, and prolong the lifetime of WSNs with a mobile sink.
A graph spectrum based geometric biclustering algorithm.
Wang, Doris Z; Yan, Hong
2013-01-21
Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285
A multi-scale non-local means algorithm for image de-noising
NASA Astrophysics Data System (ADS)
Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.
2012-06-01
A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise from images. In practice, it is difficult to remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploits the redundant nature of images to achieve de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain, and therefore does not leverage the fact that multi-scale transforms provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
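For reference, the spatial-domain NLM baseline that the multi-scale variant builds on can be written compactly: each output pixel is a weighted average over a search window, with weights derived from patch similarity. Window sizes and the filtering parameter h below are illustrative, and no multi-scale transform is applied.

    import numpy as np

    def nlm(img, patch=3, search=7, h=0.1):
        pad = search // 2 + patch // 2
        p = np.pad(img.astype(float), pad, mode='reflect')
        out = np.zeros_like(img, dtype=float)
        pr, sr = patch // 2, search // 2
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                ci, cj = i + pad, j + pad
                ref = p[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                wsum = vsum = 0.0
                for di in range(-sr, sr + 1):      # scan the search window
                    for dj in range(-sr, sr + 1):
                        ni, nj = ci + di, cj + dj
                        cand = p[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                        # weight from mean squared patch difference
                        w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                        wsum += w
                        vsum += w * p[ni, nj]
                out[i, j] = vsum / wsum
        return out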
NASA Astrophysics Data System (ADS)
Lohmann, Timo
Electric sector models are powerful tools that guide policy makers and stakeholders. Long-term power generation expansion planning models are a prominent example and determine a capacity expansion for an existing power system over a long planning horizon. With the changes in the power industry away from monopolies and regulation, the focus of these models has shifted to competing electric companies maximizing their profit in a deregulated electricity market. In recent years, consumers have started to participate in demand response programs, actively influencing electricity load and price in the power system. We introduce a model that features investment and retirement decisions over a long planning horizon of more than 20 years, as well as an hourly representation of day-ahead electricity markets in which sellers of electricity face buyers. This combination makes our model both unique and challenging to solve. Decomposition algorithms, and especially Benders decomposition, can exploit the model structure. We present a novel method that can be seen as an alternative to generalized Benders decomposition and relies on dynamic linear overestimation. We prove its finite convergence and present computational results demonstrating its superiority over traditional approaches. In certain special cases of our model, all necessary solution values in the decomposition algorithms can be calculated directly, and solving mathematical programming problems becomes entirely obsolete. This leads to highly efficient algorithms that drastically outperform their programming-problem-based counterparts. Furthermore, we discuss the implementation of all tailored algorithms and the challenges from a modeling software developer's standpoint, providing an insider's look into the modeling language GAMS. Finally, we apply our model to the Texas power system and design two electricity policies motivated by the U.S. Environmental Protection Agency's recently proposed CO2 emissions targets for the
NASA Astrophysics Data System (ADS)
Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen
Traditional genetic algorithms (GAs) display a tendency toward premature convergence when dealing with scheduling problems. To adapt the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA targeting multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to handle complex task scheduling optimization.
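Self-adaptive operator rates of the general kind described above are often implemented by lowering crossover and mutation probabilities for above-average individuals and raising them for poor ones (Srinivas-Patnaik style). The sketch below follows that convention, since the paper's exact adaptation rule is not given in the abstract.

    def adaptive_rates(fit, f_avg, f_max, pc_hi=0.9, pc_lo=0.6,
                       pm_hi=0.1, pm_lo=0.01):
        """fit: an individual's fitness (higher is better); returns the
        crossover and mutation probabilities to apply to it."""
        if fit < f_avg or f_max == f_avg:         # below average: explore more
            return pc_hi, pm_hi
        scale = (f_max - fit) / (f_max - f_avg)   # near the best: preserve it
        pc = pc_lo + (pc_hi - pc_lo) * scale
        pm = pm_lo + (pm_hi - pm_lo) * scale
        return pc, pm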
Bowen, J.; Dozier, G.
1996-12-31
This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to recognize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
Three hypothesis algorithm with occlusion reasoning for multiple people tracking
NASA Astrophysics Data System (ADS)
Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Medina-Carnicer, Rafael
2015-01-01
This work proposes a detection-based tracking algorithm able to locate and keep the identity of multiple people, who may be occluded, in uncontrolled stationary environments. Our algorithm builds a tracking graph that models spatio-temporal relationships among attributes of interacting people to predict and resolve partial and total occlusions. When a total occlusion occurs, the algorithm generates various hypotheses about the location of the occluded person, considering three cases: (a) the person keeps the same direction and speed, (b) the person follows the direction and speed of the occluder, and (c) the person remains motionless during the occlusion. By analyzing the graph, our algorithm can detect trajectories produced by false alarms and estimate the location of missing or occluded people. Our algorithm performs acceptably under complex conditions, such as partial visibility of individuals entering or leaving the scene, continuous interactions and occlusions among people, wrong or missing information in the detection of persons, and variation of a person's appearance due to illumination changes and background-clutter distracters. Our algorithm was evaluated on test sequences in the field of intelligent surveillance, achieving an overall precision of 93%. The results show that our tracking algorithm outperforms even trajectory-based state-of-the-art algorithms.
Generalized Pattern Search Algorithm for Peptide Structure Prediction
Nicosia, Giuseppe; Stracquadanio, Giovanni
2008-01-01
Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the predicted peptide conformations with those of a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293
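One search-and-poll iteration of generalized pattern search can be sketched on a generic objective: poll a fixed set of directions at the current mesh size, accept the first improving point, and shrink the mesh when the poll fails. The peptide energy model itself is outside the scope of this sketch.

    import numpy as np

    def gps_step(f, x, step, directions=None):
        """Poll the 2n coordinate directions; shrink the mesh on failure."""
        n = x.size
        if directions is None:
            directions = np.vstack([np.eye(n), -np.eye(n)])
        fx = f(x)
        for d in directions:
            cand = x + step * d
            if f(cand) < fx:             # success: accept and keep the mesh
                return cand, step
        return x, step / 2.0             # failure: refine the mesh

    # Usage: iterate gps_step until `step` falls below a tolerance.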
Chang, C.Y.
1986-01-01
New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
Algorithm for genome contig assembly. Final report
1995-09-01
An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Service Discovery Framework Supported by EM Algorithm and Bayesian Classifier
NASA Astrophysics Data System (ADS)
Peng, Yanbin
Service-oriented computing has become a mainstream research field. Meanwhile, machine learning is a promising AI technology that can enhance the performance of traditional algorithms. Therefore, aiming to solve the service discovery problem, this paper introduces a Bayesian classifier into a web service discovery framework, which can improve service querying speed. In this framework, the services in the service library become the training set of the Bayesian classifier, and a service query becomes a testing sample. The service matchmaking process can then be executed within the relevant service class, which contains fewer services and thus saves time. Because the classes of the services in the training set are unknown, the EM algorithm is used to estimate the prior probabilities and likelihood functions. Experimental results show that the method supported by the EM algorithm and Bayesian classifier outperforms other methods in time complexity.
New validation algorithm for data association in SLAM.
Guerra, Edmundo; Munguia, Rodrigo; Bolea, Yolanda; Grau, Antoni
2013-09-01
In this work, a novel data validation algorithm for a single-camera SLAM system is introduced. A 6-degree-of-freedom monocular SLAM method based on delayed inverse-depth (DI-D) feature initialization is used as a benchmark. This SLAM methodology has been improved with the introduction of the proposed data association batch validation technique, the highest order hypothesis compatibility test (HOHCT). This new algorithm is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the characteristics of the delayed inverse-depth technique. In order to show the capabilities of the proposed technique, experimental tests have been compared with classical methods. The proposed technique outperformed the classical approaches. PMID:23701896
Study of genetic direct search algorithms for function optimization
NASA Technical Reports Server (NTRS)
Zeigler, B. P.
1974-01-01
The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.
Algorithms versus architectures for computational chemistry
NASA Technical Reports Server (NTRS)
Partridge, H.; Bauschlicher, C. W., Jr.
1986-01-01
The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies are examined for these algorithms on both the Cyber 205 and the Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static dataflow machine proposed by Dennis.
Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm
NASA Astrophysics Data System (ADS)
Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei
2007-12-01
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, the correction is often only partial, and deconvolution is required to reach the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the standard R-L algorithm (SRLA) often suffers from speckling, wraparound artifacts, and noise. A modified R-L algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct-sampling approach and incorporates noise statistics into the standard R-L algorithm. An alternating iterative method is applied to estimate the PSF and the object. Comparative experiments on indoor data and AO images were performed with the SRLA and the MRLA. Experimental results show that the novel MRLA outperforms the SRLA.
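For reference, the standard R-L update that the MRLA modifies multiplies the current estimate by the back-projected ratio of the observed image to the current blur prediction. A minimal 2-D sketch of that baseline follows; Magain's correct-sampling step and the noise statistics are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """Standard R-L deconvolution for 2-D float images (the SRLA baseline)."""
    psf_flip = psf[::-1, ::-1]              # adjoint of convolution with psf
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```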
Improved satellite image compression and reconstruction via genetic algorithms
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary
2008-10-01
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
Applications of genetic algorithms and neural networks to interatomic potentials
NASA Astrophysics Data System (ADS)
Hobday, Steven; Smith, Roger; BelBruno, Joe
1999-06-01
Applications of two modern artificial intelligence (AI) techniques, genetic algorithms (GA) and neural networks (NN), to computer simulations are reported. It is shown that GAs are very useful tools for determining the minimum-energy structures of clusters of atoms described by interatomic potential functions, and that they generally outperform other optimisation methods for this task. A number of applications are given, including covalent and close-packed structures of single- or multi-component atomic species. It is also shown that (many-body) interatomic potential functions for multi-component systems can be derived by training a specially constructed NN on a variety of structural data.
Arbab, Alvira Ayoub; Sun, Kyung Chul; Sahito, Iftikhar Ali; Qadir, Muhammad Bilal; Choi, Yun Seon; Jeong, Sung Hoon
2016-03-23
Highly conductive mesoporous carbon structures based on multiwalled carbon nanotubes (MWCNTs) and activated charcoal (AC) were synthesized by an enzymatic dispersion method. The synthesized carbon configuration consists of synchronized structures of highly conductive MWCNTs and porous activated charcoal morphology. The proposed carbon structure was used as a counter electrode (CE) for quasi-solid-state dye-sensitized solar cells (DSSCs). The AC-doped MWCNT hybrid showed much enhanced electrocatalytic activity (ECA) toward the polymer gel electrolyte and revealed a charge transfer resistance (RCT) of 0.60 Ω, demonstrating a fast electron transport mechanism. The exceptional electrocatalytic activity and high conductivity of the AC-doped MWCNT hybrid CE are associated with its synchronized features of high surface area and electronic conductivity, which produce a higher interfacial reaction with the quasi-solid electrolyte. Morphological studies confirm an amorphous yet conductive 3D carbon structure with a high density of CNT colloid. The abundant oxygen surface groups and defect-rich structure can entrap a large volume of quasi-solid electrolyte and provide multiple sites for the iodide/triiodide catalytic reaction. The resultant D719 DSSC composed of this novel hybrid CE fabricated with polymer gel electrolyte demonstrated an efficiency of 10.05% with a high fill factor (83%), outperforming the Pt electrode. Such facile synthesis of the CE, together with low cost and sustainability, supports the proposed DSSC structure as an efficient next-generation photovoltaic device. PMID:26911208
Proctor, Darby; Essler, Jennifer; Pinto, Ana I.; Wismer, Sharon; Stoinski, Tara; Brosnan, Sarah F.; Bshary, Redouan
2012-01-01
The insight that animals' cognitive abilities are linked to their evolutionary history, and hence their ecology, provides the framework for the comparative approach. Despite primates' renowned dietary complexity and social cognition, including cooperative abilities, we demonstrate here that cleaner wrasse outperform three primate species (capuchin monkeys, chimpanzees, and orang-utans) in a foraging task involving a choice between two actions, both of which yield identical immediate rewards, but only one of which yields an additional delayed reward. The task decisions mirror partner choice in cleaners: they must service visiting client reef fish before resident clients in order to access both; otherwise the former switch to a different cleaner. Wild-caught adult, but not juvenile, cleaners learned to solve the task quickly and relearned the task when it was reversed. The majority of primates failed to perform above chance after 100 trials, in sharp contrast to previous studies showing that primates easily learn to choose an action that yields immediate double rewards over an alternative action. In conclusion, the adult cleaners' ability to choose a superior action with initially neutral consequences is likely due to repeated exposure in nature, which leads to specific learned optimal foraging decision rules. PMID:23185293
A Cascade Random Forests Algorithm for Predicting Protein-Protein Interaction Sites.
Wei, Zhi-Sen; Yang, Jing-Yu; Shen, Hong-Bin; Yu, Dong-Jun
2015-10-01
Protein-protein interactions exist ubiquitously and play important roles in the life cycles of living cells. The interaction sites (residues) are essential to understanding the underlying mechanisms of protein-protein interactions. Previous research has demonstrated that the accurate identification of protein-protein interaction sites (PPIs) is helpful for developing new therapeutic drugs because many drugs will interact directly with those residues. Because of its significant potential in biological research and drug development, the prediction of PPIs has become an important topic in computational biology. However, a severe data imbalance exists in the PPIs prediction problem, where the number of the majority class samples (non-interacting residues) is far larger than that of the minority class samples (interacting residues). Thus, we developed a novel cascade random forests algorithm (CRF) to address the serious data imbalance that exists in the PPIs prediction problem. The proposed CRF resolves the negative effect of data imbalance by connecting multiple random forests in a cascade-like manner, each of which is trained with a balanced training subset that includes all minority samples and a subset of majority samples using an effective ensemble protocol. Based on the proposed CRF, we implemented a new sequence-based PPIs predictor, called CRF-PPI, which takes the combined features of position-specific scoring matrices, averaged cumulative hydropathy, and predicted relative solvent accessibility as model inputs. Benchmark experiments on both the cross validation and independent validation datasets demonstrated that the proposed CRF-PPI outperformed the state-of-the-art sequence-based PPIs predictors. The source code for CRF-PPI and the benchmark datasets are available online at http://csbio.njust.edu.cn/bioinf/CRF-PPI for free academic use. PMID:26441427
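A hedged sketch of the balanced-cascade idea described above (the paper's exact ensemble protocol is not reproduced): every stage is a random forest trained on all minority samples plus a fresh random subset of majority samples, and a residue is called interacting only if it survives every stage.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_cascade(X, y, n_stages=5, seed=0):
    """One forest per stage, each trained on a balanced subset."""
    rng = np.random.default_rng(seed)
    minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
    stages = []
    for _ in range(n_stages):
        sub = rng.choice(majority, size=minority.size, replace=False)
        idx = np.concatenate([minority, sub])
        stages.append(RandomForestClassifier(n_estimators=100).fit(X[idx], y[idx]))
    return stages

def predict_cascade(stages, X):
    pred = np.ones(len(X), dtype=int)
    for rf in stages:
        pred &= rf.predict(X).astype(int)   # must survive every stage
    return pred
```

Because each stage sees a different slice of the majority class, the cascade uses the abundant non-interacting residues without letting them swamp the minority class, which is the stated point of the CRF design.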
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, which is based on, and significantly improved from, distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean distance transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
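The BVC optimization itself is not detailed in the abstract; for orientation, this is the Dijkstra core that it accelerates, run here over a voxel graph whose per-voxel costs are assumed to penalize voxels close to the colon wall so the shortest path hugs the centerline.

```python
import heapq

def dijkstra(neighbors, cost, start, goal):
    """neighbors: voxel -> iterable of adjacent voxels; cost: voxel -> float."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v in neighbors[u]:
            nd = d + cost[v]              # assumed high near the boundary
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

Cutting boundary voxels shrinks both `neighbors` and the heap traffic, which is consistent with the speedup the authors attribute to BVC.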
Another hybrid conjugate gradient algorithm for unconstrained optimization
NASA Astrophysics Data System (ADS)
Andrei, Neculai
2008-02-01
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter $\beta_k$ is computed as a convex combination of $\beta_k^{HS}$ (Hestenes-Stiefel) and $\beta_k^{DY}$ (Dai-Yuan), i.e., $\beta_k^{C} = (1-\theta_k)\,\beta_k^{HS} + \theta_k\,\beta_k^{DY}$. The parameter $\theta_k$ in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair $(s_k, y_k)$ satisfies the quasi-Newton equation $\nabla^2 f(x_{k+1})\,s_k = y_k$, where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and Dai-Yuan conjugate gradient algorithms, as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
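The convex-combination formula transcribes directly into code; this sketch omits the paper's rule for choosing theta_k and the Wolfe line search.

```python
import numpy as np

def beta_hybrid(g_new, g_old, d_old, theta):
    """Convex combination of the Hestenes-Stiefel and Dai-Yuan parameters."""
    y = g_new - g_old                          # y_k = g_{k+1} - g_k
    beta_hs = (g_new @ y) / (d_old @ y)        # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / (d_old @ y)    # Dai-Yuan
    return (1.0 - theta) * beta_hs + theta * beta_dy

# The next search direction is then d_new = -g_new + beta * d_old.
```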
CORDIC Algorithms: Theory And Extensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc
1989-11-01
Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms, as demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three- and higher-dimensional rotations, etc.
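For concreteness, here is the classical circular-mode CORDIC rotation, one of the schemes the matrix-exponential framework above rederives: the target angle is decomposed into arctangents of powers of two, so each step needs only shifts and adds, plus a constant gain correction.

```python
import math

def cordic_rotate(x, y, angle, n=32):
    """Rotate (x, y) by `angle` (radians, |angle| < ~1.74) in circular mode."""
    K = 1.0
    for i in range(n):                      # accumulated gain of the microrotations
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0         # steer the residual angle toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K

print(cordic_rotate(1.0, 0.0, math.pi / 4))  # ~ (0.7071, 0.7071)
```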
Does sporadic Leber's disease exist?
Nikoskelainen, E; Nummelin, K; Savontaus, M L
1988-12-01
This study gives some illustrative case reports of the difficulties in the diagnosis of Leber's hereditary optic neuroretinopathy. It underlines the importance of a careful family history and of a search for peripapillary microangiopathy in the maternal relatives of patients suspected of suffering from Leber's disease. The article casts doubt on the existence of so-called sporadic Leber's disease. PMID:2977137
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms
NASA Astrophysics Data System (ADS)
Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei
2016-01-01
In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
Zhang, Ling; Zhang, Yaojun; Wang, Hong; Zou, Jianwen; Siemann, Evan
2013-01-01
Two mechanisms that have been proposed to explain success of invasive plants are unusual biotic interactions, such as enemy release or enhanced mutualisms, and increased resource availability. However, while these mechanisms are usually considered separately, both may be involved in successful invasions. Biotic interactions may be positive or negative and may interact with nutritional resources in determining invasion success. In addition, the effects of different nutrients on invasions may vary. Finally, genetic variation in traits between populations located in introduced versus native ranges may be important for biotic interactions and/or resource use. Here, we investigated the roles of soil biota, resource availability, and plant genetic variation using seedlings of Triadica sebifera in an experiment in the native range (China). We manipulated nitrogen (control or 4 g/m2), phosphorus (control or 0.5 g/m2), soil biota (untreated or sterilized field soil), and plant origin (4 populations from the invasive range, 4 populations from the native range) in a full factorial experiment. Phosphorus addition increased root, stem, and leaf masses. Leaf mass and height growth depended on population origin and soil sterilization. Invasive populations had higher leaf mass and growth rates than native populations did in fresh soil but they had lower, comparable leaf mass and growth rates in sterilized soil. Invasive populations had higher growth rates with phosphorus addition but native ones did not. Soil sterilization decreased specific leaf area in both native and exotic populations. Negative effects of soil sterilization suggest that soil pathogens may not be as important as soil mutualists for T. sebifera performance. Moreover, interactive effects of sterilization and origin suggest that invasive T. sebifera may have evolved more beneficial relationships with the soil biota. Overall, seedlings from the invasive range outperformed those from the native range, however, an
Benhamou, Simon; Collet, Julien
2015-12-21
The "Lévy Foraging Hypothesis" promotes Lévy walk (LW) as the best strategy to forage for patchily but unpredictably located prey. This strategy mixes extensive and intensive searching phases in a mostly cue-free way through strange, scale-free kinetics. It is however less efficient than a cue-driven two-scale Composite Brownian walk (CBW) when the resources encountered are systematically detected. Nevertheless, it could be assumed that the intrinsic capacity of LW to trigger cue-free intensive searching at random locations might be advantageous when resources are not only scarcely encountered but also so cryptic that the probability to detect those encountered during movement is low. Surprisingly, this situation, which should be quite common in natural environments, has almost never been studied. Only a few studies have considered "saltatory" foragers, which are fully "blind" while moving and thus detect prey only during scanning pauses, but none of them compared the efficiency of LW vs. CBW in this context or in less extreme contexts where the detection probability during movement is not null but very low. In a study based on computer simulations, we filled the bridge between the concepts of "pure continuous" and "pure saltatory" foraging by considering that the probability to detect resources encountered while moving may range from 0 to 1. We showed that regularly stopping to scan the environment can indeed improve efficiency, but only at very low detection probabilities. Furthermore, the LW is then systematically outperformed by a mixed cue-driven/internally-driven CBW. It is thus more likely that evolution tends to favour strategies that rely on environmental feedbacks rather than on strange kinetics. PMID:26463680
Doi, Hideyuki; Takahara, Teruhiko; Minamoto, Toshifumi; Matsuhashi, Saeko; Uchii, Kimiko; Yamanaka, Hiroki
2015-05-01
Environmental DNA (eDNA) has been used to investigate species distributions in aquatic ecosystems. Most of these studies use real-time polymerase chain reaction (PCR) to detect eDNA in water; however, PCR amplification is often inhibited by the presence of organic and inorganic matter. In droplet digital PCR (ddPCR), the sample is partitioned into thousands of nanoliter droplets, and PCR inhibition may be reduced by the detection of the end-point of PCR amplification in each droplet, independent of the amplification efficiency. In addition, real-time PCR reagents can affect PCR amplification and consequently alter detection rates. We compared the effectiveness of ddPCR and real-time PCR using two different PCR reagents for the detection of the eDNA from invasive bluegill sunfish, Lepomis macrochirus, in ponds. We found that ddPCR had higher detection rates of bluegill eDNA in pond water than real-time PCR with either of the PCR reagents, especially at low DNA concentrations. Limits of DNA detection, which were tested by spiking the bluegill DNA to DNA extracts from the ponds containing natural inhibitors, found that ddPCR had higher detection rate than real-time PCR. Our results suggest that ddPCR is more resistant to the presence of PCR inhibitors in field samples than real-time PCR. Thus, ddPCR outperforms real-time PCR methods for detecting eDNA to document species distributions in natural habitats, especially in habitats with high concentrations of PCR inhibitors. PMID:25850372
Refining Gold from Existing Data
Wade, Ted D.
2014-01-01
Purpose of review The purpose is to review the arguments for increasing use of existing data in health research. Recent Findings The reuse of data in observational, exploratory and outcome studies, as well as in confirming other findings, is being justified on epistemological grounds as the major path to new knowledge and to the generalizing of findings to “real world” populations. It is also justified on grounds of cost, power and efficiency, even though data reuse comes with real informatics, scientific culture, societal and regulatory issues. Summary Data reuse is becoming more compelling. When contemplating new research for any purpose other than RCTs for efficacy, it is prudent to consider whether partnership with existing data holders should be part of the optimal research plan. PMID:24625777
The EXIST Mission Concept Study
NASA Technical Reports Server (NTRS)
Fishman, Gerald J.; Grindlay, J.; Hong, J.
2008-01-01
EXIST is a mission designed to find and study black holes (BHs) over a wide range of environments and masses, including: 1) BHs accreting from binary companions or dense molecular clouds throughout our Galaxy and the Local Group, 2) supermassive black holes (SMBHs) lying dormant in galaxies that reveal their existence by disrupting passing stars, 3) SMBHs that are hidden from our view at lower energies due to obscuration by the gas that they accrete, and 4) the birth of stellar-mass BHs, which is accompanied by long cosmic gamma-ray bursts (GRBs), seen several times a day and possibly associated with the earliest stars to form in the Universe. EXIST will provide an order of magnitude increase in sensitivity and angular resolution as well as greater spectral resolution and bandwidth compared with earlier hard X-ray survey telescopes. With an onboard optical-infrared (IR) telescope, EXIST will measure the spectra and redshifts of GRBs and assess their utility as cosmological probes of the highest-z universe and the epoch of reionization. The mission would retain its primary goal of being the Black Hole Finder Probe in the Beyond Einstein Program. However, the new design for EXIST proposed to be studied here represents a significant advance from its previous incarnation as presented to BEPAC. The mission is now less than half the total mass, would be launched on the smallest EELV available (Atlas V-401) for a Medium Class mission, and most importantly includes a two-telescope complement that is ideally suited for the study of both obscured and very distant BHs. EXIST retains its very wide field hard X-ray imaging High Energy Telescope (HET) as the primary instrument, now with improved angular and spectral resolution, and in a more compact payload that allows occasional rapid slews for immediate optical/IR imaging and spectra of GRBs and AGN as well as enhanced hard X-ray spectra and timing with pointed observations. The mission would conduct a 2 year full sky survey in
Existence of undiscovered Uranian satellites
Boice, D.C.
1986-04-01
Structure in the Uranian ring system as observed in recent occultations may contain indirect evidence for the existence of undiscovered satellites. Using the Alfven and Arrhenius (1975, 1976) scenario for the formation of planetary systems, the orbital radii of up to nine hypothetical satellites interior to Miranda are computed. These calculations should provide interesting comparisons when the results from the Voyager 2 encounter with Uranus are made public. 15 refs., 1 fig., 1 tab.
Cogeneration for existing alfalfa processing
Not Available
1984-01-01
This study is designed to look at the application of gas-turbine generator cogeneration to a typical Nebraska alfalfa processing mill. The practicality is examined of installing a combustion turbine generator at a plant site and modifying existing facilities for generating electricity, utilizing the electricity generated, selling excess electricity to the power company and incorporating the turbine exhaust flow as a drying medium for the alfalfa. The results of this study are not conclusive but the findings are summarized.
Heddle, J A; Cosentino, L; Dawod, G; Swiger, R R; Paashuis-Lew, Y
1996-01-01
Self-renewing tissues have a differentiation hierarchy such that the stem cells are the only permanent residents of the tissue, and it is in these cells that most cancerous mutations arise. The progeny of the stem cells either remain stem cells or enter a transient proliferating cell population that differentiates to produce the functional cells of the tissue. The reason that this differentiation hierarchy exists has not been established. We show here that alternative hierarchies, in which there would be no stem cells, are feasible and biologically plausible. We show that current evidence from somatic mutation frequencies at both transgenic and endogenous loci implicates cell division in the origin of most somatic mutations. We suggest, therefore, that the existence of stem cells is an evolutionary consequence of a selective pressure to avoid cancer by reducing the number of somatic mutations. The stem cell hierarchy reduces the number of cell divisions of those cells that reside permanently in the tissue, which reduces the number of somatic mutations and thus minimizes the cancer rate. In the small intestine, the existence of stem cells reduces the mutant frequency in the stem cells by about one order of magnitude. Since two or more mutations are required to transform a cell, the protective effect may be 100-fold or more. Similar factors may be expected in other tissues. PMID:8991061
Adaptive link selection algorithms for distributed estimation
NASA Astrophysics Data System (ADS)
Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent
2015-12-01
This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
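A heavily simplified sketch of the exhaustive-search flavor of link selection (the stability and tracking analysis above is not reproduced): each node adapts with a standard LMS step, then keeps the neighbor subset whose combined estimate yields the smallest instantaneous squared error. The averaging combiner and all function names are illustrative assumptions.

```python
import itertools
import numpy as np

def lms_step(w, x, d, mu=0.05):
    """Standard LMS adaptation for regressor x and desired response d."""
    return w + mu * (d - x @ w) * x

def select_links(w_self, neighbor_ws, x, d):
    """Exhaustively pick the neighbor subset minimizing instantaneous error."""
    best_w, best_err = w_self, (d - x @ w_self) ** 2
    for k in range(1, len(neighbor_ws) + 1):
        for subset in itertools.combinations(neighbor_ws, k):
            w = np.mean([w_self, *subset], axis=0)   # combine local estimates
            err = (d - x @ w) ** 2
            if err < best_err:
                best_w, best_err = w, err
    return best_w   # poor-quality or failed links simply never get selected
```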
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable the application of scalable HPCs to various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) the HPCMP structure as it relates to HEATR, (2) the overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Cao, Leilei; Xu, Lihong; Goodman, Erik D.
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
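A compact sketch of the mechanics just described, with fixed rather than dynamic probabilities for brevity; the objective f, the blend crossover, and all parameter values are placeholder assumptions rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def gea_step(pop, f, p_mut=0.1, p_local=0.2, sigma=0.1):
    fit = np.array([f(x) for x in pop])
    best = pop[np.argmin(fit)]                    # current global best as guide
    out = []
    for ind in pop:
        a = rng.random(ind.shape)
        child = a * ind + (1 - a) * best          # cross with the global best
        if rng.random() < p_mut:                  # mutation (dynamic in GEA)
            child += rng.normal(0, sigma, ind.shape)
        if rng.random() < p_local:                # local search, greedy accept
            trial = child + rng.normal(0, sigma / 10, ind.shape)
            if f(trial) < f(child):
                child = trial
        out.append(child)
    return np.array(out)

# e.g. minimize the sphere function
pop = rng.normal(size=(30, 5))
for _ in range(100):
    pop = gea_step(pop, lambda x: float(x @ x))
```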
2015-01-01
The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm, named JA-ABC5, to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase exploitation. Besides that, modified mutation equations have also been introduced in the employed- and onlooker-bee phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested on the reactive power optimization problem. The results clearly show that the newly proposed algorithm outperforms the other compared algorithms in terms of convergence speed and global optimum achievement. PMID:25879054
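JA-ABC5's modified mutation equations are not given in the abstract; as a baseline, this is the standard ABC employed-bee move that such variants adjust: perturb one dimension toward or away from a random peer and keep the candidate only if it improves (greedy selection).

```python
import numpy as np

rng = np.random.default_rng(2)

def employed_bee_phase(foods, f):
    """foods: (n_sources, dim) array of candidate solutions; f: objective."""
    n, dim = foods.shape
    for i in range(n):
        k = rng.choice([j for j in range(n) if j != i])  # random peer source
        d = rng.integers(dim)                            # random dimension
        phi = rng.uniform(-1, 1)
        cand = foods[i].copy()
        cand[d] += phi * (foods[i, d] - foods[k, d])     # standard ABC mutation
        if f(cand) < f(foods[i]):                        # greedy selection
            foods[i] = cand
    return foods
```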
NASA Astrophysics Data System (ADS)
Goswami, D.; Chakraborty, S.
2014-11-01
Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to several unique advantages, such as high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimal heat-affected zone, and green manufacturing. To achieve the best machining performance and high-quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, the fireworks algorithm and the cuckoo search (CS) algorithm are applied for single- as well as multi-response optimization of two laser machining processes. Although the two algorithms obtain almost identical solutions, the CS algorithm outperforms the fireworks algorithm with respect to average computation time, convergence rate, and performance consistency.
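For orientation, a minimal sketch of the CS core compared above: Lévy-flight steps around the current best, a greedy keep, and abandonment of a fraction of nests. The laser-machining objective functions and parameter bounds are omitted, and the step sizes are illustrative.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(3)

def levy(dim, beta=1.5):
    """Mantegna's algorithm for Levy-stable step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def cuckoo_step(nests, f, alpha=0.01, pa=0.25):
    best = min(nests, key=f)
    new = [x + alpha * levy(x.size) * (x - best) for x in nests]  # Levy flights
    nests = [n if f(n) < f(o) else o for n, o in zip(new, nests)] # greedy keep
    m = len(nests)
    for i in range(m):                     # abandon a fraction pa of nests
        if rng.random() < pa:
            j, k = rng.integers(m), rng.integers(m)
            nests[i] = nests[i] + rng.random() * (nests[j] - nests[k])
    return nests
```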
Success on Algorithmic and LOCS vs. Conceptual Chemistry Exam Questions
NASA Astrophysics Data System (ADS)
Zoller, Uri; Lubezky, Aviva; Nakhleh, Mary B.; Tessier, Barbara; Dori, Yehudit J.
1995-11-01
The performance of freshman science, engineering, and in-service teacher students in three Israeli and American universities on algorithmic, lower-order cognitive skills (LOCS), and conceptual chemistry exam questions was investigated. The driving force for the study was an interest in moving chemistry instruction from an algorithm-oriented factual recall approach dominated by LOCS to a decision-making, problem-solving, and critical thinking approach dominated by higher-order cognitive skills (HOCS). Students' responses to the specially designed algorithmic, LOCS, and conceptual exam questions were scored and analyzed for correlations and for differences between the means within and across universities by the question's category. The main findings were: (1) students in all three universities performed consistently on each of the three categories in the order of algorithmic > LOCS > conceptual questions, (2) success on algorithmic does not imply success on conceptual, or even on LOCS questions, and (3) students taught in small classes outperformed by far those in large lecture sessions in all three categories. The implied paradigm shift from an algorithmic/LOCS to a conceptual/HOCS orientation should be moved from a research-based theoretical domain to actual implementation in order for a meaningful improvement of chemistry teaching to occur.
ETD: an extended time delay algorithm for ventricular fibrillation detection.
Kim, Jungyoon; Chu, Chao-Hsien
2014-01-01
Ventricular fibrillation (VF) is the most serious type of heart attack, requiring quick detection and first aid to improve patients' survival rates. To be most effective in using wearable devices for VF detection, it is vital that the detection algorithms be accurate, robust, reliable, and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method. However, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered, and (2) with the same sensitivity setting, ETD improves upon TD in three other quality measures by up to 7.64% and, in terms of aggregate accuracy, shows an improvement of 2.6% in the area under curve (AUC) compared to TD. PMID:25571480
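The abstract leaves the TD and ETD details to the paper; time-delay VF detectors are commonly described as embedding the ECG in a two-dimensional delay phase space and measuring how much of a coarse grid the trajectory visits, since disorganized VF traces fill far more boxes than sinus rhythm. The sketch below follows that common description, not the ETD specifics.

```python
import numpy as np

def box_occupancy(ecg, tau=10, bins=40):
    """Fraction of grid boxes visited by the delay embedding (x(t), x(t+tau))."""
    x, y = ecg[:-tau], ecg[tau:]
    h, _, _ = np.histogram2d(x, y, bins=bins)
    return np.count_nonzero(h) / bins ** 2

# A high occupancy fraction over an analysis window suggests VF; the
# threshold would have to be tuned on annotated data such as the CU database.
```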
Pathway-Dependent Effectiveness of Network Algorithms for Gene Prioritization
Shim, Jung Eun; Hwang, Sohyun; Lee, Insuk
2015-01-01
A network-based approach has proven useful for the identification of novel genes associated with complex phenotypes, including human diseases. Because network-based gene prioritization algorithms are based on propagating information of known phenotype-associated genes through networks, the pathway structure of each phenotype might significantly affect the effectiveness of algorithms. We systematically compared two popular network algorithms with distinct mechanisms – direct neighborhood which propagates information to only direct network neighbors, and network diffusion which diffuses information throughout the entire network – in prioritization of genes for worm and human phenotypes. Previous studies reported that network diffusion generally outperforms direct neighborhood for human diseases. Although prioritization power is generally measured for all ranked genes, only the top candidates are significant for subsequent functional analysis. We found that high prioritizing power of a network algorithm for all genes cannot guarantee successful prioritization of top ranked candidates for a given phenotype. Indeed, the majority of the phenotypes that were more efficiently prioritized by network diffusion showed higher prioritizing power for top candidates by direct neighborhood. We also found that connectivity among pathway genes for each phenotype largely determines which network algorithm is more effective, suggesting that the network algorithm used for each phenotype should be chosen with consideration of pathway gene connectivity. PMID:26091506
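The two mechanisms compared above reduce to a one-liner and a fixed-point iteration on an adjacency matrix A with a seed vector p0 marking known phenotype-associated genes; the restart and iteration parameters here are illustrative.

```python
import numpy as np

def direct_neighborhood(A, p0):
    """Score each gene by summing seed labels over its direct neighbors."""
    return A @ p0

def network_diffusion(A, p0, restart=0.5, n_iter=50):
    """Random walk with restart: information spreads over the whole network."""
    W = A / np.maximum(A.sum(axis=0), 1)      # column-normalized walk matrix
    p = p0.astype(float)
    for _ in range(n_iter):
        p = (1 - restart) * (W @ p) + restart * p0
    return p
```

Which of the two ranks the top candidates better is, per the abstract, a property of how tightly the phenotype's pathway genes are interconnected rather than of the algorithms alone.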
Library of Continuation Algorithms
Energy Science and Technology Software Center (ESTSC)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
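The paper's two POSE algorithms are not specified in this abstract; a standard building block for pose from matched feature points is least-squares rigid alignment via SVD (the Kabsch method), sketched here under the assumption that the target's spot locations are known in its body frame and measured in the sensor frame.

```python
import numpy as np

def rigid_pose(model, measured):
    """Return R, t minimizing ||R @ model_i + t - measured_i|| over all i.

    model, measured: (N, 3) arrays of matched 3-D points."""
    mc, sc = model.mean(axis=0), measured.mean(axis=0)
    H = (model - mc).T @ (measured - sc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, sc - R @ mc                         # attitude and relative position
```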
Benchmarking image fusion algorithm performance
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2012-06-01
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.