Sample records for algorithm produces results

  1. Comparison of Two Phenotypic Algorithms To Detect Carbapenemase-Producing Enterobacteriaceae

    PubMed Central

    Dortet, Laurent; Bernabeu, Sandrine; Gonzalez, Camille

    2017-01-01

    ABSTRACT A novel algorithm designed for the screening of carbapenemase-producing Enterobacteriaceae (CPE), based on faropenem and temocillin disks, was compared to that of the Committee of the Antibiogram of the French Society of Microbiology (CA-SFM), which is based on ticarcillin-clavulanate, imipenem, and temocillin disks. The two algorithms presented comparable negative predictive values (98.6% versus 97.5%) for CPE screening among carbapenem-nonsusceptible Enterobacteriaceae. However, since 46.2% (n = 49) of the CPE were correctly identified as OXA-48-like producers by the faropenem/temocillin-based algorithm, it significantly decreased the number of complementary tests needed (42.2% versus 62.6% with the CA-SFM algorithm). PMID:28607010

  2. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented, followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  3. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed; convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence properties are proved. First, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Second, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and retain the merit of backward methods. PMID:24991640

  4. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to those produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.

  5. Agricultural produce grading and sorting system using color CCD and new color identification algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Dongsheng; Zou, Jizuo; Yang, Yunping; Dong, Jianhua; Zhang, Yuanxiang

    1996-10-01

    A high-speed automatic agricultural produce grading and sorting system using a color CCD and a new color identification algorithm has been developed. In a typical application, the system sorts almonds into two output grades according to their color. Almonds are rich in 18 kinds of amino acids and 13 kinds of microminerals and vitamins and can be made into almond drink. To ensure drink quality, almonds must be sorted carefully before being made into a drink. Using this system, almonds can be sorted into two grades: up-to-grade almonds, and below-grade almonds or foreign materials. A color CCD inspects the almonds passing on a conveyor of rotating rollers, and a color identification algorithm grades the almonds and distinguishes foreign materials from them. Employing an elaborately designed mechanism, the below-grade almonds and foreign materials can be removed effectively from the raw almonds. This system can easily be adapted for inspecting and sorting other kinds of agricultural produce such as peanuts, beans, and tomatoes.

  6. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
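
    As an illustration of the nonlinear gain idea described above, the sketch below scales an input by a third-order polynomial whose coefficients are chosen so that the largest expected input lands exactly on the motion-system limit with zero slope there. This particular choice of constraints, the names, and the numbers are illustrative assumptions, not the coefficients used in the NASA study.

      import numpy as np

      def nonlinear_gain(u, u_max, y_max):
          """Third-order polynomial scaling y = a1*u + a3*u**3 (illustrative).

          Coefficients follow one plausible convention (not necessarily the paper's):
          y(u_max) = y_max and y'(u_max) = 0, so the largest expected input maps onto
          the motion limit and |y| stays within y_max on [-u_max, u_max].
          """
          a1 = 3.0 * y_max / (2.0 * u_max)
          a3 = -y_max / (2.0 * u_max**3)
          return a1 * u + a3 * u**3

      # Example: accelerations up to 10 m/s^2 must fit a +/-4 m/s^2 motion envelope.
      u = np.linspace(-10.0, 10.0, 5)
      print(nonlinear_gain(u, u_max=10.0, y_max=4.0))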

  7. An Algorithm Approach to Determining Smoking Cessation Treatment for Persons Living with HIV/AIDS: Results of a Pilot Trial

    PubMed Central

    Cropsey, Karen L.; Jardin, Bianca; Burkholder, Greer; Clark, C. Brendan; Raper, James L.; Saag, Michael

    2015-01-01

    Background Smoking now represents one of the biggest modifiable risk factors for disease and mortality in PLHIV. To produce significant changes in smoking rates among this population, treatments will need to be both acceptable to the larger segment of PLHIV smokers as well as feasible to implement in busy HIV clinics. The purpose of this study was to evaluate the feasibility and effects of a novel proactive algorithm-based intervention in an HIV/AIDS clinic. Methods PLHIV smokers (N =100) were proactively identified via their electronic medical records and were subsequently randomized at baseline to receive a 12-week pharmacotherapy-based algorithm treatment or treatment as usual. Participants were tracked in-person for 12-weeks. Participants provided information on smoking behaviors and associated constructs of cessation at each follow-up session. Results The findings revealed that many smokers reported utilizing prescribed medications when provided with a supply of cessation medication as determined by an algorithm. Compared to smokers receiving treatment as usual, PLHIV smokers prescribed these medications reported more quit attempts and greater reduction in smoking. Proxy measures of cessation readiness (e.g., motivation, self-efficacy) also favored participants receiving algorithm treatment. Conclusions This algorithm-derived treatment produced positive changes across a number of important clinical markers associated with smoking cessation. Given these promising findings coupled with the brief nature of this treatment, the overall pattern of results suggests strong potential for dissemination into clinical settings as well as significant promise for further advancing clinical health outcomes in this population. PMID:26181705

  8. Initial Results from Radiometer and Polarized Radar-Based Icing Algorithms Compared to In-Situ Data

    NASA Technical Reports Server (NTRS)

    Serke, David; Reehorst, Andrew L.; King, Michael

    2015-01-01

    In early 2015, a field campaign was conducted at the NASA Glenn Research Center in Cleveland, Ohio, USA. The purpose of the campaign is to test several prototype algorithms meant to detect the location and severity of in-flight icing (or icing aloft, as opposed to ground icing) within the terminal airspace. Terminal airspace for this project is currently defined as within 25 kilometers horizontal distance of the terminal, which in this instance is Hopkins International Airport in Cleveland. Two new and improved algorithms that utilize ground-based remote sensing instrumentation have been developed and were operated during the field campaign. The first is the 'NASA Icing Remote Sensing System', or NIRSS. The second algorithm is the 'Radar Icing Algorithm', or RadIA. In addition to these algorithms, which were derived from ground-based remote sensors, in-situ icing measurements of the profiles of super-cooled liquid water (SLW) collected with vibrating wire sondes attached to weather balloons produced a comprehensive database for comparison. Key fields from the SLW-sondes include air temperature, humidity and liquid water content, cataloged by time and 3-D location. This work gives an overview of the NIRSS and RadIA products and results are compared to in-situ SLW-sonde data from one icing case study. The location and quantity of super-cooled liquid as measured by the in-situ probes provide a measure of the utility of these prototype hazard-sensing algorithms.

  9. Mining the National Career Assessment Examination Result Using Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Pagudpud, M. V.; Palaoag, T. T.; Padirayon, L. M.

    2018-03-01

    Education is an essential process that prompts authorities to discover and establish innovative strategies for educational improvement. This study applied data mining using a clustering technique for knowledge extraction from the National Career Assessment Examination (NCAE) results in the Division of Quirino. The NCAE is an examination given to all grade 9 students in the Philippines to assess their aptitudes in different domains. Clustering the students is helpful in identifying students’ learning considerations. With the use of the RapidMiner tool, clustering algorithms such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means, k-medoids, expectation maximization clustering, and support vector clustering were analyzed. The silhouette indexes of these clustering algorithms were compared, and the results showed that the k-means algorithm with k = 3 and a silhouette index of 0.196 is the most appropriate clustering algorithm to group the students. Three groups were formed, with 477 students in the determined group (cluster 0), 310 proficient students (cluster 1), and 396 developing students (cluster 2). The data mining technique used in this study is essential in extracting useful information from the NCAE results to better understand the abilities of students, which in turn is a good basis for adopting teaching strategies.
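
    The study above selected k-means with k = 3 by comparing silhouette indexes in RapidMiner. A minimal Python analogue of that model-selection step is sketched below; the synthetic matrix of domain scores is a stand-in assumption (the NCAE data are not public), and scikit-learn is used in place of RapidMiner.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(0)
      # Stand-in for the NCAE records: rows = 1183 students, columns = aptitude domain scores.
      scores = rng.normal(loc=50.0, scale=15.0, size=(1183, 6))

      best_k, best_sil = None, -1.0
      for k in range(2, 7):
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
          sil = silhouette_score(scores, labels)
          print(f"k={k}: silhouette index = {sil:.3f}")
          if sil > best_sil:
              best_k, best_sil = k, sil

      print("selected number of clusters:", best_k)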

  10. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  11. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  12. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness for avoiding overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that the combination of Clustering Using Representatives (CURE) enhances the original synthetic minority oversampling technique (SMOTE) algorithms effectively compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, the hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out of bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, hybrid genetic-random forests algorithm, hybrid particle swarm-random forests algorithm and hybrid fish swarm-random forests algorithm can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithm's F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
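
    For readers unfamiliar with the oversampling step that CURE-SMOTE builds on, the sketch below shows the core SMOTE interpolation (a synthetic minority sample placed between a minority point and one of its nearest minority neighbors). The CURE step of first reducing the minority class to representative points, and the hybrid random-forest tuning, are not reproduced; the function and variable names are mine.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def smote_like_oversample(X_min, n_new, k=5, seed=0):
          """SMOTE-style interpolation: x_new = x + u * (x_neighbor - x), u ~ U(0, 1).
          CURE-SMOTE would first prune X_min to CURE representative points."""
          rng = np.random.default_rng(seed)
          nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
          _, idx = nn.kneighbors(X_min)           # idx[:, 0] is the point itself
          synthetic = []
          for _ in range(n_new):
              i = rng.integers(len(X_min))
              j = idx[i, rng.integers(1, k + 1)]  # one of the k nearest minority neighbors
              u = rng.random()
              synthetic.append(X_min[i] + u * (X_min[j] - X_min[i]))
          return np.vstack(synthetic)

      X_minority = np.random.default_rng(1).normal(size=(30, 4))
      print(smote_like_oversample(X_minority, n_new=10).shape)   # (10, 4)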

  13. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least square regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
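
    A hedged sketch of the fusion step described above: similarity scores from several matchers are combined by partial least squares regression into a single fused score. The data here are synthetic stand-ins (the FRGC similarity scores and human ratings are not reproduced), and scikit-learn's PLSRegression is assumed as the PLSR implementation.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      n_pairs, n_matchers = 500, 7
      # 1 = same-person face pair, 0 = different people; scores loosely track the label.
      y = rng.integers(0, 2, n_pairs)
      X = y[:, None] + rng.normal(scale=1.0, size=(n_pairs, n_matchers))

      pls = PLSRegression(n_components=2).fit(X, y)
      fused = pls.predict(X).ravel()              # one fused similarity score per face pair
      accuracy = ((fused > 0.5).astype(int) == y).mean()
      print(f"verification accuracy of the fused score: {accuracy:.2f}")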

  14. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.

  15. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement of global observational data of ET can neither be satisfied with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman- Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal

  16. Unified treatment algorithm for the management of crotaline snakebite in the United States: results of an evidence-informed consensus workshop

    PubMed Central

    2011-01-01

    Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes. PMID:21291549

  17. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  18. Improving the Interpretability of Classification Rules Discovered by an Ant Colony Algorithm: Extended Results.

    PubMed

    Otero, Fernando E B; Freitas, Alex A

    2016-01-01

    Most ant colony optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-Miner[Formula: see text] algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-Miner[Formula: see text] algorithm to discover a set of rules (unordered rules). The main motivations for this work are to improve the interpretation of individual rules by discovering a set of rules and to evaluate the impact on the predictive accuracy of the algorithm. We also propose a new measure to evaluate the interpretability of the discovered rules to mitigate the fact that the commonly used model size measure ignores how the rules are used to make a class prediction. Comparisons with state-of-the-art rule induction algorithms, support vector machines, and the cAnt-Miner[Formula: see text] producing ordered rules are also presented.

  19. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges, and image-data compression results, both alone and in combination with wavelets, are discussed.…

  20. Comparative Results of AIRS/AMSU and CrIS/ATMS Retrievals Using a Scientifically Equivalent Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2016-01-01

    The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS CDRs using a retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.

  1. A simple scoring algorithm predicting extended-spectrum β-lactamase producers in adults with community-onset monomicrobial Enterobacteriaceae bacteremia: Matters of frequent emergency department users.

    PubMed

    Lee, Chung-Hsun; Chu, Feng-Yuan; Hsieh, Chih-Chia; Hong, Ming-Yuan; Chi, Chih-Hsien; Ko, Wen-Chien; Lee, Ching-Chi

    2017-04-01

    The incidence of community-onset bacteremia caused by extended-spectrum β-lactamase (ESBL) producers is increasing. The adverse effects of ESBL production on patient outcome have been recognized, and this antimicrobial resistance has significant implications for the delay of appropriate therapy. However, a simple scoring algorithm that can easily, inexpensively, and accurately be applied in clinical settings was lacking. Thus, we established a predictive scoring algorithm for identifying patients at risk of ESBL-producer infections among patients with community-onset monomicrobial Enterobacteriaceae bacteremia (CoMEB). In a retrospective, multicenter cohort study, adults with CoMEB in the emergency department (ED) were recruited during January 2008 to December 2013. ESBL producers were determined based on ESBL phenotype. Clinical information was obtained from chart records. Of the total 1141 adults with CoMEB, 65 (5.7%) cases caused by ESBL producers were identified. Four independent multivariate predictors of ESBL-producer bacteremia with high odds ratios (ORs)-recent antimicrobial use (OR, 15.29), recent invasive procedures (OR, 12.33), nursing home residence (OR, 27.77), and frequent ED use (OR, 9.98)-were each assigned +1 point to obtain the CoMEB-ESBL score. Using the proposed scoring algorithm, a cut-off value of +2 yielded a high sensitivity (84.6%) and an acceptable specificity (92.5%); the area under the receiver operating characteristic curve was 0.92. In conclusion, this simple scoring algorithm can be used to identify CoMEB patients at high risk of ESBL-producer infection. Of note, frequent ED use was demonstrated for the first time to be a crucial predictor of ESBL-producer infections. ED clinicians should consider adequate empirical therapy with coverage of these pathogens for patients with risk factors.
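
    The reported score is simple enough to state as code. The sketch below implements the four-predictor, one-point-each CoMEB-ESBL score with the published cut-off of +2; the function and argument names are mine, and the rule is only as described in the abstract.

      def comeb_esbl_score(recent_antimicrobial_use: bool,
                           recent_invasive_procedure: bool,
                           nursing_home_residence: bool,
                           frequent_ed_use: bool) -> int:
          """Each of the four independent predictors reported in the study adds +1."""
          return sum([recent_antimicrobial_use, recent_invasive_procedure,
                      nursing_home_residence, frequent_ed_use])

      def high_esbl_risk(score: int, cutoff: int = 2) -> bool:
          """Scores at or above the reported cut-off (+2) flag high ESBL-producer risk."""
          return score >= cutoff

      s = comeb_esbl_score(True, False, True, False)
      print(s, high_esbl_risk(s))   # 2 True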

  2. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  3. Trigram-based algorithms for OCR result correction

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Faradjev, Igor; Janiszewski, Igor

    2017-03-01

    In this paper we consider the task of improving optical character recognition (OCR) results for document fields on low-quality and average-quality images using N-gram models. Cyrillic fields of the Russian Federation internal passport are analyzed as an example. Two approaches are presented: the first is based on the hypothesis that a symbol depends on its two adjacent symbols, and the second is based on the calculation of marginal distributions and Bayesian network computation. A comparison of the algorithms and experimental results within a real document OCR system are presented; it is shown that the document field OCR accuracy can be improved by more than 6% for low-quality images.
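
    To make the first (adjacent-symbol) approach concrete, the toy sketch below re-ranks OCR hypotheses for a name field by their character-trigram log-probability under a model built from a reference lexicon. The lexicon, the smoothing, and the Latin transliteration are illustrative assumptions; the authors' actual models for Cyrillic passport fields are not reproduced.

      import math
      from collections import Counter

      def trigram_logprob(words, alpha=1.0):
          """Build add-alpha smoothed character-trigram log-probabilities from a lexicon."""
          tri = Counter()
          for w in words:
              padded = f"^^{w}$"
              tri.update(padded[i:i + 3] for i in range(len(padded) - 2))
          total, vocab = sum(tri.values()), max(len(tri), 1)
          return lambda t: math.log((tri[t] + alpha) / (total + alpha * vocab))

      def score(candidate, logp):
          padded = f"^^{candidate}$"
          return sum(logp(padded[i:i + 3]) for i in range(len(padded) - 2))

      logp = trigram_logprob(["IVANOV", "IVANOVA", "PETROV", "PETROVA", "SIDOROV"])
      for hypothesis in ["IVAN0V", "IVANOV", "IVAMOV"]:   # noisy OCR readings of one field
          print(hypothesis, round(score(hypothesis, logp), 2))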

  4. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  5. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  6. Hue-preserving and saturation-improved color histogram equalization algorithm.

    PubMed

    Song, Ki Sun; Kang, Hee; Kang, Moon Gi

    2016-06-01

    In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but it sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two ways to apply the proposed CE algorithm to color images: processing the luminance channel only, or processing each color channel independently. However, these approaches can cause excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and that it outperforms existing methods in terms of objective evaluation metrics.

  7. Description of a Normal-Force In-Situ Turbulence Algorithm for Airplanes

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2003-01-01

    A normal-force in-situ turbulence algorithm for potential use on commercial airliners is described. The algorithm can produce information that can be used to predict hazardous accelerations of airplanes or to aid meteorologists in forecasting weather patterns. The algorithm uses normal acceleration and other measures of the airplane state to approximate the vertical gust velocity. That is, the fundamental, yet simple, relationship between normal acceleration and the change in normal force coefficient is exploited to produce an estimate of the vertical gust velocity. This simple approach is robust and produces a time history of the vertical gust velocity that would be intuitively useful to pilots. With proper processing, the time history can be transformed into the eddy dissipation rate that would be useful to meteorologists. Flight data for a simplified research implementation of the algorithm are presented for a severe turbulence encounter of the NASA ARIES Boeing 757 research airplane. The results indicate that the algorithm has potential for producing accurate in-situ turbulence measurements. However, more extensive tests and analysis are needed with an operational implementation of the algorithm to make comparisons with other algorithms or methods.
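
    The "fundamental, yet simple, relationship" mentioned above can be illustrated with a textbook quasi-steady approximation (a generic simplification given here for context, not the exact formulation implemented in the NASA algorithm): the incremental normal-force coefficient implied by the measured normal acceleration is equated to the coefficient change produced by a gust-induced angle-of-attack change,

      \Delta C_N = \frac{\Delta n \, W}{\bar{q} S},
      \qquad
      \Delta C_N \approx C_{N_\alpha} \, \Delta\alpha \approx C_{N_\alpha} \, \frac{w_g}{V}
      \;\;\Longrightarrow\;\;
      w_g \approx \frac{\Delta n \, W \, V}{\bar{q} \, S \, C_{N_\alpha}},
      \qquad \bar{q} = \tfrac{1}{2}\rho V^2,

    where Δn is the incremental load factor, W the aircraft weight, S the wing area, V the airspeed, ρ the air density, C_{N_α} the normal-force-curve slope, and w_g the estimated vertical gust velocity.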

  8. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, redundancy and masking techniques are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.

  9. Algorithms for detecting antibodies to HIV-1: results from a rural Ugandan cohort.

    PubMed

    Nunn, A J; Biryahwaho, B; Downing, R G; van der Groen, G; Ojwiya, A; Mulder, D W

    1993-08-01

    To evaluate an algorithm using two enzyme immunoassays (EIAs) for anti-HIV-1 antibodies in a rural African population and to assess alternative simplified algorithms. Sera obtained from 7895 individuals in a rural population survey were tested using an algorithm based on two different EIA systems: Recombigen HIV-1 EIA and Wellcozyme HIV-1 Recombinant. Alternative algorithms were assessed using negative or confirmed positive sera. None of the 227 sera classified as unequivocally negative by the two assays were positive by Western blot. Of 192 sera unequivocally positive by both assays, four were seronegative by Western blot; the possibility of technical error cannot be ruled out in three of these. One of the alternative algorithms assessed, which classified all borderline or discordant assay results as negative, had a specificity of 100% and a sensitivity of 98.4%. The cost of this algorithm is one-third that of the conventional algorithm. Our evaluation suggests that high specificity and sensitivity can be obtained without using Western blot and at a considerable reduction in cost.
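
    The simplified two-EIA decision rule described above (borderline or discordant results reported as negative, with no Western blot confirmation) can be sketched as follows; the labels are mine and the code only mirrors the logic stated in the abstract.

      def classify_serum(eia1_positive: bool, eia2_positive: bool, borderline: bool = False) -> str:
          """Simplified algorithm: report positive only when both EIAs are unequivocally
          positive; classify borderline or discordant results as negative."""
          if borderline:
              return "negative"
          if eia1_positive and eia2_positive:
              return "positive"
          return "negative"   # discordant or doubly negative

      print(classify_serum(True, True))                      # positive
      print(classify_serum(True, False))                     # negative (discordant)
      print(classify_serum(True, True, borderline=True))     # negative (borderline)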

  10. Algorithms for monitoring warfarin use: Results from Delphi Method.

    PubMed

    Kano, Eunice Kazue; Borges, Jessica Bassani; Scomparini, Erika Burim; Curi, Ana Paula; Ribeiro, Eliane

    2017-10-01

    Warfarin stands as the most prescribed oral anticoagulant. New oral anticoagulants have been approved recently; however, their use is limited and techniques for reversing their anticoagulant effect are not well established. Thus, our study's purpose was to develop algorithms for the therapeutic monitoring of patients taking warfarin based on the opinion of physicians who prescribe this medicine in their clinical practice. The development of the algorithms was performed in two stages: (i) literature review and (ii) algorithm evaluation by physicians using a Delphi method. Based on the articles analyzed, two algorithms were developed: "Recommendations for the use of warfarin in anticoagulation therapy" and "Recommendations for the use of warfarin in anticoagulation therapy: dose adjustment and bleeding control." These algorithms were then analyzed by 19 physicians who responded to the invitation and agreed to participate in the study. Of these, 16 responded to the first round, 11 to the second, and eight to the third round. A consensus of 70% or higher was reached for most issues and of at least 50% for six questions. We were able to develop algorithms to monitor the use of warfarin by physicians using a Delphi method. The proposed method is inexpensive, involves the participation of specialists, and has proved adequate for the intended purpose. Further studies are needed to validate these algorithms, enabling them to be used in clinical practice.

  11. Algorithms for optimization of branching gravity-driven water networks

    NASA Astrophysics Data System (ADS)

    Dardani, Ian; Jones, Gerard F.

    2018-05-01

    The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
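
    To show the flavor of the discrete-diameter backtracking approach evaluated above, the sketch below sizes a single serial gravity main: it enumerates diameter assignments segment by segment, prunes branches whose accumulated Hazen-Williams head loss exceeds the available gravity head (or whose cost already exceeds the best found), and keeps the cheapest feasible design. The diameters, unit costs, flows, and head budget are invented for illustration, and real branching GDWNs add constraints not handled here.

      # Candidate diameters (m) and illustrative unit costs per metre (assumed values).
      DIAMETERS = [0.05, 0.075, 0.10, 0.15, 0.20]
      UNIT_COST = {0.05: 4.0, 0.075: 7.0, 0.10: 11.0, 0.15: 22.0, 0.20: 38.0}
      C_HW = 150.0   # Hazen-Williams roughness coefficient (smooth pipe)

      def headloss(q, d, length):
          """Hazen-Williams friction loss (m): q in m^3/s, d and length in m."""
          return 10.67 * length * q**1.852 / (C_HW**1.852 * d**4.8704)

      def backtrack(segments, head_budget):
          """Depth-first search over diameter assignments with feasibility and cost pruning."""
          best = {"cost": float("inf"), "diams": None}

          def recurse(chosen, cost, loss):
              if loss > head_budget or cost >= best["cost"]:
                  return                                    # infeasible or already worse
              if len(chosen) == len(segments):
                  best["cost"], best["diams"] = cost, tuple(chosen)
                  return
              q, length = segments[len(chosen)]
              for d in DIAMETERS:
                  chosen.append(d)
                  recurse(chosen, cost + UNIT_COST[d] * length,
                          loss + headloss(q, d, length))
                  chosen.pop()

          recurse([], 0.0, 0.0)
          return best["cost"], best["diams"]

      # Two serial segments (flow in m^3/s, length in m) fed by 25 m of available head.
      print(backtrack([(0.004, 600.0), (0.002, 400.0)], head_budget=25.0))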

  12. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
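
    The "simple functions" of the four waveband intensities are not spelled out in the abstract; the sketch below only illustrates the general style of such a rule with an assumed two-band ratio and threshold, which should not be taken as the bands, function, or threshold used in the study.

      import numpy as np

      def band_ratio_mask(f680, f720, ratio_threshold=1.2):
          """Flag pixels whose 680/720 nm fluorescence ratio exceeds a threshold
          (illustrative rule only; the published detection functions differ)."""
          ratio = np.divide(f680, f720, out=np.zeros_like(f680), where=f720 > 0)
          return ratio > ratio_threshold

      rng = np.random.default_rng(0)
      bands = {w: rng.uniform(0.1, 1.0, (100, 100)) for w in (680, 684, 720, 780)}
      mask = band_ratio_mask(bands[680], bands[720])
      print("flagged pixels:", int(mask.sum()))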

  13. Surface reflectance retrieval from satellite and aircraft sensors - Results of sensors and algorithm comparisons during FIFE

    NASA Technical Reports Server (NTRS)

    Markham, B. L.; Halthore, R. N.; Goetz, S. J.

    1992-01-01

    Visible to shortwave infrared radiometric data collected by a number of remote sensing instruments on aircraft and satellite platforms were compared over common areas in the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) site on August 4, 1989, to assess their radiometric consistency and the adequacy of atmospheric correction algorithms. The instruments in the study included the Landsat 5 Thematic Mapper (TM), the SPOT 1 high-resolution visible (HRV) 1 sensor, the NS001 Thematic Mapper simulator, and the modular multispectral radiometers (MMRs). Atmospheric correction routines analyzed were an algorithm developed for FIFE, LOWTRAN 7, and 5S. A comparison between corresponding bands of the SPOT 1 HRV 1 and the Landsat 5 TM sensors indicated that the two instruments were radiometrically consistent to within about 5 percent. Retrieved surface reflectance factors using the FIFE algorithm over one site under clear atmospheric conditions indicated a capability to determine near-nadir surface reflectance factors to within about 0.01 at a reflectance of 0.06 in the visible (0.4-0.7 microns) and about 0.30 in the near infrared (0.7-1.2 microns) for all but the NS001 sensor. All three atmospheric correction procedures produced absolute reflectances to within 0.005 in the visible and near infrared. In the shortwave infrared (1.2-2.5 microns) region the three algorithms differed in the retrieved surface reflectances primarily owing to differences in predicted gaseous absorption. Although uncertainties in the measured surface reflectance in the shortwave infrared precluded definitive results, the 5S code appeared to predict gaseous transmission marginally more accurately than LOWTRAN 7.

  14. 9.9 Sales Grid Style Produces Results

    ERIC Educational Resources Information Center

    Blake, Robert R.; Mouton, Jane Srygley

    1970-01-01

    Selling effectiveness experiments have provided evidence that solution selling (problem solving) produces far better results than formula selling (sales technique oriented), hard sell, people-oriented selling, or order taking. (PT)

  15. Evaluation of registration, compression and classification algorithms. Volume 1: Results

    NASA Technical Reports Server (NTRS)

    Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

    1979-01-01

    The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

  16. Exact BPF and FBP algorithms for nonstandard saddle curves.

    PubMed

    Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge

    2005-11-01

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.

  17. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    PubMed

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and the limitations of both algorithms are further discussed.

  18. A genetic algorithm for solving supply chain network design model

    NASA Astrophysics Data System (ADS)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way so that, in addition to producing feasible solutions, it also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  19. Clustering algorithm for determining community structure in large networks

    NASA Astrophysics Data System (ADS)

    Pujol, Josep M.; Béjar, Javier; Delgado, Jordi

    2006-07-01

    We propose an algorithm to find the community structure in complex networks based on the combination of spectral analysis and modularity optimization. The clustering produced by our algorithm is as accurate as the best algorithms in the literature on modularity optimization; however, the main asset of the algorithm is its efficiency. The best match for our algorithm is Newman's fast algorithm, which is the reference algorithm for clustering in large networks due to its efficiency. When both algorithms are compared, our algorithm outperforms the fast algorithm in both the efficiency and the accuracy of the clustering, in terms of modularity. Thus, the results suggest that the proposed algorithm is a good choice for analyzing the community structure of medium and large networks in the range of tens to hundreds of thousands of vertices.
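
    For reference, the modularity that both the proposed algorithm and Newman's fast algorithm optimize is, in its standard (Newman-Girvan) form,

      Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j),

    where A is the adjacency matrix, k_i the degree of vertex i, m the number of edges, and δ(c_i, c_j) equals 1 when vertices i and j belong to the same community and 0 otherwise; the spectral step of the proposed algorithm is not reproduced here.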

  20. Spatial independent component analysis of functional MRI time-series: to what extent do results depend on the algorithm used?

    PubMed

    Esposito, Fabrizio; Formisano, Elia; Seifritz, Erich; Goebel, Rainer; Morrone, Renato; Tedeschi, Gioacchino; Di Salle, Francesco

    2002-07-01

    Independent component analysis (ICA) has been successfully employed to decompose functional MRI (fMRI) time-series into sets of activation maps and associated time-courses. Several ICA algorithms have been proposed in the neural network literature. Applied to fMRI, these algorithms might lead to different spatial or temporal readouts of brain activation. We compared the two ICA algorithms that have been used so far for spatial ICA (sICA) of fMRI time-series: the Infomax (Bell and Sejnowski [1995]: Neural Comput 7:1004-1034) and the Fixed-Point (Hyvärinen [1999]: Adv Neural Inf Proc Syst 10:273-279) algorithms. We evaluated the Infomax- and Fixed Point-based sICA decompositions of simulated motor, and real motor and visual activation fMRI time-series using an ensemble of measures. Log-likelihood (McKeown et al. [1998]: Hum Brain Mapp 6:160-188) was used as a measure of how significantly the estimated independent sources fit the statistical structure of the data; receiver operating characteristics (ROC) and linear correlation analyses were used to evaluate the algorithms' accuracy of estimating the spatial layout and the temporal dynamics of simulated and real activations; cluster sizing calculations and an estimation of a residual gaussian noise term within the components were used to examine the anatomic structure of ICA components and for the assessment of noise reduction capabilities. Whereas both algorithms produced highly accurate results, the Fixed-Point outperformed the Infomax in terms of spatial and temporal accuracy as long as inferential statistics were employed as benchmarks. Conversely, the Infomax sICA was superior in terms of global estimation of the ICA model and noise reduction capabilities. Because of its adaptive nature, the Infomax approach appears to be better suited to investigate activation phenomena that are not predictable or adequately modelled by inferential techniques. Copyright 2002 Wiley-Liss, Inc.
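
    The Fixed-Point algorithm compared above is the one underlying FastICA, so a minimal spatial-ICA sketch can be written with scikit-learn (Infomax has no scikit-learn implementation, so only one of the two algorithms is shown). The synthetic time-by-voxel matrix is a stand-in assumption, not the motor or visual fMRI data used in the study.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n_timepoints, n_voxels, n_components = 120, 2000, 5

      # Synthetic fMRI-like data: sparse spatial sources mixed by random time courses.
      spatial_sources = rng.laplace(size=(n_components, n_voxels))
      mixing_timecourses = rng.normal(size=(n_timepoints, n_components))
      data = mixing_timecourses @ spatial_sources + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

      # Spatial ICA: voxels are treated as samples, so the estimated sources are spatial maps.
      ica = FastICA(n_components=n_components, random_state=0)
      spatial_maps = ica.fit_transform(data.T)    # shape (n_voxels, n_components)
      time_courses = ica.mixing_                  # shape (n_timepoints, n_components)
      print(spatial_maps.shape, time_courses.shape)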

  1. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to become trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than those produced by K-means and Particle Swarm Optimization (PSO).
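
    The ADDC objective named above is commonly defined as the per-cluster mean distance of documents to their cluster centroid, averaged over clusters; a small sketch of that fitness function (not of the firefly search itself) follows, with toy vectors standing in for the 20Newsgroups TF-IDF matrix.

      import numpy as np

      def addc(X, labels, centroids):
          """Average Distance of Documents to the cluster Centroid (lower is better)."""
          per_cluster = []
          for j, c in enumerate(centroids):
              members = X[labels == j]
              if len(members):
                  per_cluster.append(np.linalg.norm(members - c, axis=1).mean())
          return float(np.mean(per_cluster))

      X = np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0],
                    [0.1, 1.0], [0.2, 0.9], [0.0, 1.2]])
      labels = np.array([0, 0, 0, 1, 1, 1])
      centroids = np.array([X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)])
      print(round(addc(X, labels, centroids), 3))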

  2. Exact BPF and FBP algorithms for nonstandard saddle curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Hengyong; Zhao Shiying; Ye Yangbo

    2005-11-15

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.

  3. Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency

    NASA Technical Reports Server (NTRS)

    Gauntner, J. W.

    1980-01-01

    An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, intended for an axial-flow turbine, is implemented as a subroutine for use in a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.

  4. Modelling Evolutionary Algorithms with Stochastic Differential Equations.

    PubMed

    Heredia, Jorge Pérez

    2017-11-20

    There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
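
    For context, the classical multiplicative drift theorem from runtime analysis (stated here in its standard Markov-chain form, of which the paper derives SDE analogues and a more general variant) reads:

      \text{if } \mathbb{E}\left[\,X_t - X_{t+1} \mid X_t = x\,\right] \ge \delta x
      \text{ for all } x \ge x_{\min} > 0,
      \text{ then } \mathbb{E}[T] \le \frac{1 + \ln\!\left(x_0 / x_{\min}\right)}{\delta},
      \qquad T = \min\{\, t : X_t < x_{\min} \,\},

    for a non-negative process (X_t) started at X_0 = x_0.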

  5. Algorithm-guided treatment of depression reduces treatment costs--results from the randomized controlled German Algorithm Project (GAPII).

    PubMed

    Ricken, Roland; Wiethoff, Katja; Reinhold, Thomas; Schietsch, Kathrin; Stamm, Thomas; Kiermeir, Julia; Neu, Peter; Heinz, Andreas; Bauer, Michael; Adli, Mazda

    2011-11-01

    The German Algorithm Project, Phase 2 (GAP2) revealed that a standardized stepwise treatment regimen (SSTR) results in better treatment outcomes than treatment as usual (TAU) in depressed inpatients. The objective of this study was a health economic evaluation of SSTR based on a cost effectiveness analysis (CEA). GAP2 was a randomized controlled study with 148 patients. In an intention to treat (ITT) analysis direct treatment costs for study duration (SD) and total time in hospital (TTH; enrolment to discharge) were calculated based on daily hospital charges followed by a CEA to calculate cost expenditure per remitted patient. Treatment costs in SSTR compared to TAU were significantly lower for SD (SSTR: 10 830 € ± 8 632 €, TAU: 15 202 € ± 12 483 €; p = 0.026) and did not differ significantly for TTH (SSTR: 21 561 € ± 16 162 €; TAU: 18 248 € ± 13 454 €; p = 0.208). CEA revealed that the costs per remission in SSTR were significantly lower for SD (SSTR: 20 035 € ± 15 970 €; TAU: 38 793 € ± 31 853 €; p<0.0001) and TTH (SSTR: 31 285 € ± 23 451 €; TAU: 38 581 € ± 28 449 €, p = 0.041). Indirect costs were not assessed. Different dropout rates in TAU and SSTR complicated interpretation of data. An SSTR-based algorithm results in a superior cost effectiveness at no significant extra costs. Implementation of treatment algorithms in inpatient-care may help reduce treatment costs. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. A New Approximate Chimera Donor Cell Search Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Nixon, David (Technical Monitor)

    1998-01-01

    The objectives of this study were to develop a chimera-based full potential methodology which is compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver and to develop a fast donor cell search algorithm that is compatible with the chimera full potential approach. Results of this work included presenting a new donor cell search algorithm suitable for use with a chimera-based full potential solver. This algorithm was found to be extremely fast and simple, producing donor cells as fast as 60,000 per second.

  7. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
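
    To make the comparison above concrete, a minimal simulated-annealing loop of the kind being compared might look like the sketch below; the schedule representation and the fitness() and neighbour() helpers are hypothetical placeholders, not the Java software described in the paper.

        import math
        import random

        def simulated_annealing(initial, fitness, neighbour,
                                t_start=1.0, t_end=1e-3, alpha=0.95, steps_per_t=100):
            """Generic simulated annealing: maximise fitness(schedule).

            `initial` is any schedule representation; `neighbour(s)` returns a
            slightly modified copy; `fitness(s)` returns a score to maximise."""
            current, current_fit = initial, fitness(initial)
            best, best_fit = current, current_fit
            t = t_start
            while t > t_end:
                for _ in range(steps_per_t):
                    candidate = neighbour(current)
                    cand_fit = fitness(candidate)
                    delta = cand_fit - current_fit
                    # Accept improvements always, worse moves with Boltzmann probability.
                    if delta >= 0 or random.random() < math.exp(delta / t):
                        current, current_fit = candidate, cand_fit
                        if current_fit > best_fit:
                            best, best_fit = current, current_fit
                t *= alpha  # geometric cooling schedule
            return best, best_fit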

  8. Comparison of multiobjective evolutionary algorithms: empirical results.

    PubMed

    Zitzler, E; Deb, K; Thiele, L

    2000-01-01

    In this paper, we provide a systematic comparison of various evolutionary approaches to multiobjective optimization using six carefully chosen test functions. Each test function involves a particular feature that is known to cause difficulty in the evolutionary optimization process, mainly in converging to the Pareto-optimal front (e.g., multimodality and deception). By investigating these different problem features separately, it is possible to predict the kind of problems to which a certain technique is or is not well suited. However, in contrast to what was suspected beforehand, the experimental results indicate a hierarchy of the algorithms under consideration. Furthermore, the emerging effects are evidence that the suggested test functions provide sufficient complexity to compare multiobjective optimizers. Finally, elitism is shown to be an important factor for improving evolutionary multiobjective search.
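
    As a small aside, the Pareto-dominance test that underlies the notion of a Pareto-optimal front can be written in a few lines (minimisation is assumed here; this is generic background, not code from the study):

        def dominates(a, b):
            """True if objective vector `a` Pareto-dominates `b` (minimisation):
            a is no worse in every objective and strictly better in at least one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(points):
            """Return the non-dominated subset of a list of objective vectors."""
            return [p for p in points
                    if not any(dominates(q, p) for q in points if q is not p)]

        # Example: two-objective points; (1, 5) and (2, 2) are the non-dominated ones.
        print(pareto_front([(1, 5), (2, 2), (3, 4), (2, 6)]))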

  9. Algorithme intelligent d'optimisation d'un design structurel de grande envergure [Intelligent optimisation algorithm for a large-scale structural design]

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor's disc. These results are compared to other results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, binary genetic algorithm, real number genetic algorithm, genetic algorithm using multiple parents crossovers, differential evolution genetic algorithm, the Hooke & Jeeves generalized pattern search method and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor's disc problem. One drawback of GATE is a lesser efficiency for highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.

  10. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; also, the implementation of the ABC algorithm is shown to be simpler than that of the PSO algorithm.
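
    A compact sketch of a global-best PSO loop applied to a generic least-squares parameter-extraction objective is given below; the error function, bounds, and toy exponential model are illustrative assumptions rather than the PSP model or settings used in the study.

        import numpy as np

        def pso_minimise(error_fn, lower, upper, n_particles=30, iters=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            """Global-best particle swarm optimisation of error_fn over box bounds."""
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            dim = lower.size
            x = rng.uniform(lower, upper, size=(n_particles, dim))   # positions
            v = np.zeros_like(x)                                     # velocities
            pbest, pbest_err = x.copy(), np.array([error_fn(p) for p in x])
            g = pbest[pbest_err.argmin()].copy()                     # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lower, upper)
                err = np.array([error_fn(p) for p in x])
                improved = err < pbest_err
                pbest[improved], pbest_err[improved] = x[improved], err[improved]
                g = pbest[pbest_err.argmin()].copy()
            return g, pbest_err.min()

        # Toy usage: recover parameters of y = a*exp(b*x) from noiseless "measurements".
        xdata = np.linspace(0, 1, 20)
        ydata = 2.0 * np.exp(1.3 * xdata)
        rms = lambda p: np.sqrt(np.mean((p[0] * np.exp(p[1] * xdata) - ydata) ** 2))
        print(pso_minimise(rms, lower=[0, 0], upper=[5, 3]))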

  11. The Texas medication algorithm project: clinical results for schizophrenia.

    PubMed

    Miller, Alexander L; Crismon, M Lynn; Rush, A John; Chiles, John; Kashner, T Michael; Toprac, Marcia; Carmody, Thomas; Biggs, Melanie; Shores-Wilson, Kathy; Chiles, Judith; Witte, Brad; Bow-Thomas, Christine; Velligan, Dawn I; Trivedi, Madhukar; Suppes, Trisha; Shon, Steven

    2004-01-01

    In the Texas Medication Algorithm Project (TMAP), patients were given algorithm-guided treatment (ALGO) or treatment as usual (TAU). The ALGO intervention included a clinical coordinator to assist the physicians and administer a patient and family education program. The primary comparison in the schizophrenia module of TMAP was between patients seen in clinics in which ALGO was used (n = 165) and patients seen in clinics in which no algorithms were used (n = 144). A third group of patients, seen in clinics using an algorithm for bipolar or major depressive disorder but not for schizophrenia, was also studied (n = 156). The ALGO group had modestly greater improvement in symptoms (Brief Psychiatric Rating Scale) during the first quarter of treatment. The TAU group caught up by the end of 12 months. Cognitive functions were more improved in ALGO than in TAU at 3 months, and this difference was greater at 9 months (the final cognitive assessment). In secondary comparisons of ALGO with the second TAU group, the greater improvement in cognitive functioning was again noted, but the initial symptom difference was not significant.

  12. Evaluation and analysis of Seasat-A scanning multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR final Antenna Pattern Correction (APC) algorithm are discussed. The evaluation consisted of: (1) a direct comparison of the outputs of the final and interim APC algorithms; and (2) an analysis of a possible relationship between observed cross-track gradients in the interim brightness temperatures and the asymmetry in the antenna temperature data. Results indicate a bias between the brightness temperatures produced by the final and interim APC algorithms.

  13. Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network

    PubMed Central

    Abdullah, Radhwan Mohamed; Zukarnain, Zuriati Ahmad

    2017-01-01

    Transferring a huge amount of data between different network locations over the network links depends on the network's traffic capacity and data rate. Traditionally, a mobile device performs a vertical handover considering only one criterion, the Received Signal Strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load and an inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in the heterogeneous wireless network. The algorithm considers three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of the handover number probability and the handover failure probability. In addition, it is noticed that the network priority handover decision algorithm produces better results compared to the equal priority and the mobile priority handover decision algorithms. Finally, the simulation results are validated by the analytical model. PMID:28708067
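
    A minimal sketch of a weighted multi-criteria network-selection score of the general kind such algorithms build on is shown below; the criteria, weights, and candidate values are illustrative assumptions, not the equal-priority, mobile-priority, or network-priority algorithms of the paper.

        def handover_score(candidate, weights):
            """Weighted sum over normalised criteria; larger is better.
            Benefit criteria (e.g. RSS, bandwidth) enter positively,
            cost criteria (e.g. delay, monetary cost) enter negatively."""
            return sum(weights[name] * value for name, value in candidate.items())

        # Hypothetical, already-normalised criteria values.
        networks = {
            "LTE":   {"rss": 0.8, "bandwidth": 0.7, "delay": -0.3, "cost": -0.6},
            "WiMAX": {"rss": 0.6, "bandwidth": 0.8, "delay": -0.4, "cost": -0.4},
            "WLAN":  {"rss": 0.5, "bandwidth": 0.9, "delay": -0.2, "cost": -0.1},
        }
        weights = {"rss": 0.4, "bandwidth": 0.3, "delay": 0.15, "cost": 0.15}

        best = max(networks, key=lambda n: handover_score(networks[n], weights))
        print(best)  # hand over to the highest-scoring interface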

  14. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  15. Gaia Data Release 1. Cross-match with external catalogues. Algorithm and results

    NASA Astrophysics Data System (ADS)

    Marrese, P. M.; Marinoni, S.; Fabrizio, M.; Giuffrida, G.

    2017-11-01

    Context. Although the Gaia catalogue on its own will be a very powerful tool, it is the combination of this highly accurate archive with other archives that will truly open up amazing possibilities for astronomical research. The advanced interoperation of archives is based on cross-matching, leaving the user with the feeling of working with one single data archive. The data retrieval should work not only across data archives, but also across wavelength domains. The first step for seamless data access is the computation of the cross-match between Gaia and external surveys. Aims: The matching of astronomical catalogues is a complex and challenging problem both scientifically and technologically (especially when matching large surveys like Gaia). We describe the cross-match algorithm used to pre-compute the match of Gaia Data Release 1 (DR1) with a selected list of large publicly available optical and IR surveys. Methods: The overall principles of the adopted cross-match algorithm are outlined. Details are given on the developed algorithm, including the methods used to account for position errors, proper motions, and environment; to define the neighbours; and to define the figure of merit used to select the most probable counterpart. Results: Statistics on the results are also given. The results of the cross-match are part of the official Gaia DR1 catalogue.
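
    The basic idea of a positional cross-match can be illustrated with a nearest-neighbour search on the unit sphere, as sketched below; this ignores the position errors, proper motions, environment, and figure of merit handled by the actual algorithm, and the catalogues are toy arrays.

        import numpy as np
        from scipy.spatial import cKDTree

        def radec_to_xyz(ra_deg, dec_deg):
            """Convert (RA, Dec) in degrees to unit vectors on the sphere."""
            ra, dec = np.radians(ra_deg), np.radians(dec_deg)
            return np.column_stack([np.cos(dec) * np.cos(ra),
                                    np.cos(dec) * np.sin(ra),
                                    np.sin(dec)])

        def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
            """For each source in catalogue 1, return the index of the nearest
            catalogue-2 source within `radius_arcsec`, or -1 if none is found."""
            xyz1, xyz2 = radec_to_xyz(ra1, dec1), radec_to_xyz(ra2, dec2)
            # Angular radius converted to a chord length on the unit sphere.
            chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
            dist, idx = cKDTree(xyz2).query(xyz1, distance_upper_bound=chord)
            return np.where(np.isfinite(dist), idx, -1)

        # Toy usage with two tiny catalogues.
        ra1, dec1 = np.array([10.0, 20.0]), np.array([-5.0, 30.0])
        ra2, dec2 = np.array([10.0001, 50.0]), np.array([-5.0001, 30.0])
        print(crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=2.0))  # [0 -1]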

  16. Routing design and fleet allocation optimization of freeway service patrol: Improved results using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Xiuqiao; Wang, Jian

    2018-07-01

    Freeway service patrol (FSP) is considered to be an effective method for incident management and can help transportation agency decision-makers alter existing route coverage and fleet allocation. This paper investigates the FSP problem of patrol routing design and fleet allocation, with the objective of minimizing the overall average incident response time. While the simulated annealing (SA) algorithm and its improvements have been applied to solve this problem, they often become trapped in a local optimum. Moreover, the issue of searching efficiency remains to be further addressed. In this paper, we employ the genetic algorithm (GA) and SA to solve the FSP problem. To maintain population diversity and avoid premature convergence, a niche strategy is incorporated into the traditional genetic algorithm. We also employ an elitist strategy to speed up the convergence. Numerical experiments have been conducted with the help of the Sioux Falls network. Results show that the GA slightly outperforms the dual-based greedy (DBG) algorithm, the very large-scale neighborhood searching (VLNS) algorithm, the SA algorithm and the scenario algorithm.
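
    A bare-bones genetic algorithm with an elitist step, in the spirit of the method described above, is sketched below; the encoding, the operators, and the hypothetical fitness(), random_individual(), crossover(), and mutate() callables are placeholders, and the niche strategy is omitted for brevity.

        import random

        def genetic_algorithm(fitness, random_individual, crossover, mutate,
                              pop_size=50, generations=200, elite=2, p_mut=0.1):
            """Maximise fitness(individual) with tournament selection and elitism."""
            pop = [random_individual() for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=fitness, reverse=True)
                next_pop = ranked[:elite]                 # elitism: keep the best as-is
                while len(next_pop) < pop_size:
                    # Tournament selection of two parents.
                    p1 = max(random.sample(pop, 3), key=fitness)
                    p2 = max(random.sample(pop, 3), key=fitness)
                    child = crossover(p1, p2)
                    if random.random() < p_mut:
                        child = mutate(child)
                    next_pop.append(child)
                pop = next_pop
            return max(pop, key=fitness)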

  17. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.

  18. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This will transform the image for a better image analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the grey values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is then made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm.

  19. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
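
    For orientation, a single-scale Högbom-style minor loop (repeatedly locating the brightest residual pixel and subtracting a scaled, shifted PSF) is sketched below; the multiscale components, scale-bias function, joined-channel cleaning, and auto-masking of the paper are not represented.

        import numpy as np

        def hogbom_minor_loop(dirty, psf, gain=0.1, threshold=0.0, max_iter=1000):
            """Very simplified single-scale CLEAN minor loop.

            `dirty` is the residual image, `psf` the point spread function with its
            peak at the array centre.  Returns (model_components, residual)."""
            residual = dirty.copy()
            model = np.zeros_like(dirty)
            cy, cx = np.array(psf.shape) // 2
            for _ in range(max_iter):
                y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
                peak = residual[y, x]
                if abs(peak) <= threshold:
                    break
                model[y, x] += gain * peak
                # Subtract the shifted, scaled PSF, clipping the overlap at the edges.
                y0, y1 = max(0, y - cy), min(residual.shape[0], y - cy + psf.shape[0])
                x0, x1 = max(0, x - cx), min(residual.shape[1], x - cx + psf.shape[1])
                py0, px0 = y0 - (y - cy), x0 - (x - cx)
                residual[y0:y1, x0:x1] -= gain * peak * psf[py0:py0 + (y1 - y0),
                                                            px0:px0 + (x1 - x0)]
            return model, residual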

  20. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.

  1. Elaboration of a semi-automated algorithm for brain arteriovenous malformation segmentation: initial results.

    PubMed

    Clarençon, Frédéric; Maizeroi-Eugène, Franck; Bresson, Damien; Maingreaud, Flavien; Sourour, Nader; Couquet, Claude; Ayoub, David; Chiras, Jacques; Yardin, Catherine; Mounayer, Charbel

    2015-02-01

    The purpose of our study was to distinguish the different components of a brain arteriovenous malformation (bAVM) on 3D rotational angiography (3D-RA) using a semi-automated segmentation algorithm. Data from 3D-RA of 15 patients (8 males, 7 females; 14 supratentorial bAVMs, 1 infratentorial) were used to test the algorithm. Segmentation was performed in two steps: (1) nidus segmentation from propagation (vertical then horizontal) of tagging on the reference slice (i.e., the slice on which the nidus had the biggest surface); (2) contiguity propagation (based on density and variance) from tagging of arteries and veins distant from the nidus. Segmentation quality was evaluated by comparison with six frame/s DSA by two independent reviewers. Analysis of supraselective microcatheterisation was performed to dispel discrepancy. Mean duration for bAVM segmentation was 64 ± 26 min. Quality of segmentation was evaluated as good or fair in 93% of cases. Segmentation had better results than six frame/s DSA for the depiction of a focal ectasia on the main draining vein and for the evaluation of the venous drainage pattern. This segmentation algorithm is a promising tool that may help improve the understanding of bAVM angio-architecture, especially the venous drainage. • The segmentation algorithm allows for the distinction of the AVM's components • This algorithm helps to see the venous drainage of bAVMs more precisely • This algorithm may help to reduce the treatment-related complication rate.

  2. Improved Ant Colony Clustering Algorithm and Its Performance Study

    PubMed Central

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  3. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  4. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
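
    For context, the continuous-time matrix Riccati differential equation that such optimal-control formulations solve for the time-varying gain has the standard form below; the specific state, weighting, and perception-model matrices of the algorithm are not reproduced here.

        \[
        -\dot{P}(t) = A^{\mathsf{T}} P(t) + P(t) A - P(t) B R^{-1} B^{\mathsf{T}} P(t) + Q,
        \qquad
        u(t) = -R^{-1} B^{\mathsf{T}} P(t)\, x(t),
        \]

    with A and B the state-space matrices, Q and R the state and control weighting matrices, and P(t) the solution whose real-time update the neurocomputing approach approximates.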

  5. An Algorithm for Automatically Modifying Train Crew Schedule

    NASA Astrophysics Data System (ADS)

    Takahashi, Satoru; Kataoka, Kenji; Kojima, Teruhito; Asami, Masayuki

    Once a break-down of the train schedule occurs, the crew schedule as well as the train schedule has to be modified as quickly as possible to restore them. In this paper, we propose an algorithm for automatically modifying a crew schedule that takes all constraints into consideration, presenting a model of the combined problem of crews and trains. The proposed algorithm builds an initial solution by relaxing some of the constraint conditions, and then uses a Taboo-search method to revise this solution in order to minimize the degree of constraint violation resulting from these relaxed conditions. We then show not only that the algorithm can generate a constraint-satisfying solution, but also that the solution will satisfy the experts. That is, we show that the proposed algorithm is capable of producing a usable solution in a short time by applying it to actual cases of train-schedule break-down, and that the solution is at least as good as those produced manually, by comparing both solutions from several points of view.

  6. A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study.

    PubMed

    Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy

    2016-08-01

    Tumor volume estimation, as well as accurate and reproducible borders segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05) underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.

  7. Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on developmental theory of learning. Because of these properties, setting up the additional population is trivial making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet the CGA has poor coverage across the Pareto front.

  8. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  9. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2016-01-01

    This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
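
    The nonlinear weighted least-squares step described above can be summarised, in a generic Gauss-Newton form (not the flight implementation), as:

        \[
        \hat{x}_{k+1} = \hat{x}_k
        + \left( J_k^{\mathsf{T}} W J_k \right)^{-1} J_k^{\mathsf{T}} W \,
        \big[\, y - h(\hat{x}_k) \,\big],
        \qquad
        J_k = \left. \frac{\partial h}{\partial x} \right|_{\hat{x}_k},
        \]

    where y collects the measured port pressures, h(x) is the surface-pressure model evaluated at the current atmospheric-state estimate, and W is the measurement weighting matrix.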

  10. Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Budzien, S. A.; Hei, M. A.

    2017-03-01

    We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach based on regularization to a partial differential equation.
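
    As an illustration of the second of the two ISRAs mentioned above, the basic Richardson-Lucy multiplicative update for a nonnegative linear system y ≈ Ax can be sketched as follows; the tomographic projection matrix and any regularisation are replaced here by a toy system.

        import numpy as np

        def richardson_lucy(A, y, n_iter=100, eps=1e-12):
            """Richardson-Lucy iteration for y ~ A @ x with x >= 0.

            A: (m, n) nonnegative system matrix, y: (m,) nonnegative data.
            Each update multiplies x by the back-projected ratio of data to the
            current forward projection, which preserves nonnegativity."""
            A = np.asarray(A, float)
            y = np.asarray(y, float)
            x = np.ones(A.shape[1])              # flat nonnegative starting guess
            norm = A.sum(axis=0) + eps           # column sums for normalisation
            for _ in range(n_iter):
                forward = A @ x + eps
                x *= (A.T @ (y / forward)) / norm
            return x

        # Toy usage: a small overdetermined nonnegative system.
        A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
        x_true = np.array([2.0, 3.0])
        print(richardson_lucy(A, A @ x_true, n_iter=500))  # approaches x_true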

  11. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    PubMed Central

    Deng, Zhongliang; Zhu, Di

    2018-01-01

    The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used by cryptography, multiple-input-multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite system (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finite steps and always produce better results than the original LLL reduction algorithm with n > 2. The simulations clearly prove that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix with 39% improvement on average for typical cases, which can significantly reduce the searching space for solving ILS problem. The simulation results also show that the pivoted reflection has significantly declined the number of swaps in the algorithm by 57%, making n-LLL a more practical reduction algorithm. PMID:29351224
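
    For reference, the classical Lovász condition that n-LLL generalises is stated, in terms of the Gram-Schmidt vectors b*_k and coefficients μ_{k,j}, as:

        \[
        \delta \,\lVert b^{*}_{k-1} \rVert^{2} \;\le\; \lVert b^{*}_{k} \rVert^{2}
        + \mu_{k,k-1}^{2} \,\lVert b^{*}_{k-1} \rVert^{2},
        \qquad \tfrac{1}{4} < \delta \le 1,
        \]

    together with the size-reduction requirement |μ_{k,j}| ≤ 1/2 for all j < k; a basis is LLL-reduced when both conditions hold for every k.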

  12. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrumental pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
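
    A minimal sketch of the basic LMS transversal-filter update described above is given below; the Filtered-X extension (which pre-filters the reference signal through a model of the secondary path) and the spacecraft-specific signals are not shown, and the toy system-identification example is illustrative.

        import numpy as np

        def lms_filter(x, d, n_taps=32, mu=0.01):
            """Adapt an FIR filter so that its output tracks the desired signal d.

            x: reference input samples, d: desired (disturbance-correlated) signal.
            Returns the final weights and the error history."""
            w = np.zeros(n_taps)
            errors = np.zeros(len(x))
            for n in range(n_taps - 1, len(x)):
                x_vec = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
                y = w @ x_vec                           # filter output
                e = d[n] - y                            # instantaneous error
                w += 2.0 * mu * e * x_vec               # LMS weight update
                errors[n] = e
            return w, errors

        # Toy usage: identify an unknown 4-tap FIR system from its input/output.
        rng = np.random.default_rng(1)
        x = rng.standard_normal(5000)
        h_true = np.array([0.5, -0.3, 0.2, 0.1])
        d = np.convolve(x, h_true, mode="full")[:len(x)]
        w, errors = lms_filter(x, d, n_taps=4, mu=0.01)
        print(np.round(w, 3))  # close to h_true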

  13. Three-dimensional particle tracking velocimetry algorithm based on tetrahedron vote

    NASA Astrophysics Data System (ADS)

    Cui, Yutong; Zhang, Yang; Jia, Pan; Wang, Yuan; Huang, Jingcong; Cui, Junlei; Lai, Wing T.

    2018-02-01

    A particle tracking velocimetry algorithm based on tetrahedron vote, named TV-PTV, is proposed to overcome the limited selection of effective algorithms for 3D flow visualisation. In this new cluster-matching algorithm, tetrahedrons produced by the Delaunay tessellation are used as the basic units for inter-frame matching, which results in a simple algorithmic structure with only two independent preset parameters. Test results obtained using the synthetic test image data from the Visualisation Society of Japan show that TV-PTV presents accuracy comparable to that of the classical algorithm based on the new relaxation method (NRX). Compared with NRX, TV-PTV possesses a smaller number of loops in programming and thus a shorter computing time, especially for large particle displacements and high particle concentration. TV-PTV is confirmed to be practically effective using an actual 3D wake flow.

  14. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    PubMed

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, it possessed its own merit and claimed to outperform the others. However, the claim was prone to subjectivity because each algorithm was compared with only a few other algorithms and only used a small set of performance indices for comparison. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms. Based on the proposed performance indices, we conducted a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects but weaknesses in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TFs identification algorithms. Most importantly, these proposed indices can be easily applied to measure the performance of new algorithms.

  15. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings by showing concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, known to produce uniform sampling.

  16. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in a very inefficient execution of those rules since they are so often order-sensitive. This is relevant to tasks like those of the Deep Space Network in that it allows the knowledge base to be incrementally developed and automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems including knowledge-based systems. However, this approach of exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequence clause in a knowledge base to optimally order the rules to minimize inference cycles. An algorithm was developed that optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, thus resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.

  17. Algorithms in Learning, Teaching, and Instructional Design. Studies in Systematic Instruction and Training Technical Report 51201.

    ERIC Educational Resources Information Center

    Gerlach, Vernon S.; And Others

    An algorithm is defined here as an unambiguous procedure which will always produce the correct result when applied to any problem of a given class of problems. This paper gives an extended discussion of the definition of an algorithm. It also explores in detail the elements of an algorithm, the representation of algorithms in standard prose, flow…

  18. Symbiotic organisms search algorithm for dynamic economic dispatch with valve-point effects

    NASA Astrophysics Data System (ADS)

    Sonmez, Yusuf; Kahraman, H. Tolga; Dosoglu, M. Kenan; Guvenc, Ugur; Duman, Serhat

    2017-05-01

    In this study, the symbiotic organisms search (SOS) algorithm is proposed to solve the dynamic economic dispatch with valve-point effects problem, which is one of the most important problems of the modern power system. Practical constraints such as valve-point effects, ramp rate limits and prohibited operating zones have been considered in the problem formulation. The proposed algorithm was tested on five different test cases in 5-unit, 10-unit and 13-unit systems. The obtained results have been compared with other well-known metaheuristic methods reported before. Results show that the proposed algorithm has good convergence and produces better results than the other methods.
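
    The valve-point effect mentioned above is commonly modelled by superimposing a rectified sinusoid on the quadratic fuel-cost curve; a standard textbook form (not necessarily the exact formulation used in this paper) is:

        \[
        F_i(P_i) = a_i + b_i P_i + c_i P_i^{2}
        + \left| \, e_i \sin\!\big( f_i \,( P_i^{\min} - P_i ) \big) \right|,
        \qquad P_i^{\min} \le P_i \le P_i^{\max},
        \]

    where a_i, b_i, c_i are the fuel-cost coefficients of unit i and e_i, f_i set the amplitude and frequency of the valve-point ripple; the dynamic dispatch additionally couples consecutive time periods through the ramp-rate limits.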

  19. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    NASA Astrophysics Data System (ADS)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at spatial and temporal scales required for assessing longer-term trends in land cover change are typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as National Land Cover Database and percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training the classifiers and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster format data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analytical analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.

  20. Algorithms for accelerated convergence of adaptive PCA.

    PubMed

    Chatterjee, C; Kang, Z; Roychowdhury, V P

    2000-01-01

    We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
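
    For context, the classical single-unit Oja update, representative of the gradient-descent style of adaptive PCA that the faster algorithms above are measured against, can be sketched as follows; the learning rate and synthetic data are illustrative.

        import numpy as np

        def oja_first_pc(X, eta=0.01, epochs=20, seed=0):
            """Estimate the first principal component of zero-mean data X (n_samples, dim)
            with Oja's rule: w += eta * y * (x - y * w), where y = w @ x."""
            rng = np.random.default_rng(seed)
            w = rng.standard_normal(X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(epochs):
                for x in X:
                    y = w @ x
                    w += eta * y * (x - y * w)   # Hebbian term with built-in normalisation
            return w / np.linalg.norm(w)

        # Toy usage: anisotropic Gaussian data; the leading PC aligns with the first axis
        # (up to sign).
        rng = np.random.default_rng(2)
        X = rng.standard_normal((2000, 2)) * np.array([3.0, 0.5])
        print(np.round(oja_first_pc(X), 2))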

  1. A Comparison of Direction Finding Results From an FFT Peak Identification Technique With Those From the Music Algorithm

    DTIC Science & Technology

    1991-07-01

    A Comparison of Direction Finding Results From an FFT Peak Identification Technique With Those From the MUSIC Algorithm (U), by L.E. Montbriand, CRC Report No. 1438, Government of Canada, Ottawa, July 1991.

  2. Proposed hybrid-classifier ensemble algorithm to map snow cover area

    NASA Astrophysics Data System (ADS)

    Nijhawan, Rahul; Raman, Balasubramanian; Das, Josodhir

    2018-01-01

    The metaclassification ensemble approach is known to improve the prediction performance of snow-covered area. The methodology adopted in this case is based on a neural network along with four state-of-the-art machine learning algorithms (support vector machine, artificial neural networks, spectral angle mapper, and K-means clustering) and a snow index, the normalized difference snow index. An AdaBoost ensemble algorithm based on decision trees for snow-cover mapping is also proposed. According to the available literature, these methods have been rarely used for snow-cover mapping. Employing the above techniques, a study was conducted for the Raktavarn and Chaturangi Bamak glaciers, Uttarakhand, Himalaya using a multispectral Landsat 7 ETM+ (enhanced thematic mapper) image. The study also compares the results with those obtained from statistical combination methods (majority rule and belief functions) and the accuracies of individual classifiers. Accuracy assessment is performed by computing the quantity and allocation disagreement, analyzing statistical measures (accuracy, precision, specificity, AUC, and sensitivity) and receiver operating characteristic curves. A total of 225 combinations of parameters for individual classifiers were trained and tested on the dataset and the results were compared with the proposed approach. It was observed that the proposed methodology produced the highest classification accuracy (95.21%), close to that (94.01%) produced by the proposed AdaBoost ensemble algorithm. From the sets of observations, it was concluded that the ensemble of classifiers produced better results compared to individual classifiers.
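
    A minimal sketch of an AdaBoost-of-decision-trees classifier of the general kind proposed above, using scikit-learn on synthetic stand-in features, is shown below; the feature construction, labels, and hyper-parameters are placeholders rather than those of the study.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # Synthetic stand-in for per-pixel features (e.g. spectral bands plus NDSI)
        # with binary labels: 1 = snow, 0 = no snow.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 7))
        y = (X[:, 0] + 0.5 * X[:, 3] - 0.8 * X[:, 6] > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # The default weak learner in scikit-learn's AdaBoostClassifier is a
        # depth-1 decision tree (a decision stump).
        model = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=0)
        model.fit(X_tr, y_tr)
        print("accuracy:", accuracy_score(y_te, model.predict(X_te)))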

  3. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    PubMed

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated using the dataset collected in the Yellow River Estuary between September 1 and 10, 2009. By comparing the assessment accuracy of the TSA, FSA, and UMSA algorithms, it was found that the UMSA algorithm had superior performance in comparison with the two other algorithms, TSA and FSA. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the NRMSE (normalized root mean square error) by 25.54% when compared with the FSA algorithm, and by 29.66% in comparison with the TSA algorithm. These are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results in comparison with the TSA and FSA algorithms. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
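
    For orientation, semi-analytical three-band Chla models of this family (following Gitelson et al.) are conceptually of the form below; the exact band choices and coefficients of the TSA, FSA, and UMSA algorithms are determined by the calibration described in the paper.

        \[
        \mathrm{Chla} \;\propto\; \Big[ R_{rs}^{-1}(\lambda_1) - R_{rs}^{-1}(\lambda_2) \Big]\, R_{rs}(\lambda_3),
        \]

    where λ1 lies in a spectral region of strong chlorophyll-a absorption, λ2 is chosen so that absorption by other constituents is comparable to that at λ1, and λ3 is dominated by backscattering.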

  4. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    PubMed

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance in pharmacogenetic algorithms has not been well assessed yet. We developed a new dosing algorithm including polymorphisms associated both with warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. The derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who were on stable anticoagulation therapy for at least 3 months with different oral anticoagulation therapy (OAT) indications. Performance of the new algorithm, evaluated by the mean absolute error (MAE, defined as the absolute value of the difference between the observed daily maintenance dose and the predicted daily dose), the correlation with the observed dose, and the R² value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm could correctly assign 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week) and high (≥45 mg/week) dosing ranges, respectively. Our data showed a significant increase in predictive accuracy among patients requiring a high warfarin dose compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require higher warfarin dosage.

  5. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). Two distinct sets of Guidance and Control (G&C) algorithms are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  6. A new image enhancement algorithm with applications to forestry stand mapping

    NASA Technical Reports Server (NTRS)

    Kan, E. P. F. (Principal Investigator); Lo, J. K.

    1975-01-01

    The author has identified the following significant results. Results show that the new algorithm produced cleaner classification maps in which holes of small, predesignated sizes were eliminated and significant boundary information was preserved. These cleaner post-processed maps better resemble real timber stand maps and are thus more usable products than the maps before post-processing. Compared with an accepted neighbor-checking post-processing technique, the new algorithm is more appropriate for timber stand mapping.

  7. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection, and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are:
    - 3 for ATSR (ORAC, developed by RAL / Oxford University; ADV, developed by FMI; and the SU algorithm, developed by Swansea University)
    - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS)
    - 1 for POLDER over ocean (LOA)
    - 1 for synergetic retrieval (SYNAER by DLR)
    - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI)
    - 1 for GOMOS stratospheric extinction profile retrieval (BIRA)
    The first seven algorithms aim at the retrieval of the AOD. However, the algorithms differ in their approach, even for algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm, several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for one month, September 2008. The data from the same month are subsequently used for

  8. A Seed-Based Plant Propagation Algorithm: The Feeding Station Model

    PubMed Central

    Salhi, Abdellah

    2015-01-01

    The seasonal production of fruit and seeds is akin to opening a feeding station, such as a restaurant. Agents coming to feed on the fruit are like customers attending the restaurant; they arrive at a certain rate and get served at a certain rate following some appropriate processes. The same applies to birds and animals visiting and feeding on ripe fruit produced by plants such as the strawberry plant. This phenomenon underpins the seed dispersion of the plants. Modelling it as a queuing process results in a seed-based search/optimisation algorithm. This variant of the Plant Propagation Algorithm is described, analysed, tested on nontrivial problems, and compared with well established algorithms. The results are included. PMID:25821858

  9. Research on AHP decision algorithms based on BP algorithm

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the thinking activity by which people choose or judge, and scientific decision making has always been a hot issue in the field of research. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express subjective judgments in numerical form. In decision analysis using the AHP method, the rationality of the pairwise-comparison judgment matrix has a great influence on the decision result. However, when dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirements. The BP (back-propagation) neural network algorithm is an adaptive nonlinear dynamic system. It has powerful collective computing ability and learning ability, and it can refine the data by constantly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to improve the consistency of the AHP pairwise-comparison judgment matrix.
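
    The inconsistency problem referred to above is conventionally quantified with Saaty's consistency ratio. The sketch below shows that standard check (not the paper's BP-based correction); the 3x3 judgment matrix is invented and the random-index table holds the usual Saaty values.

    ```python
    import numpy as np

    # Saaty's random-index values used in the consistency ratio (n = 1..9)
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

    def consistency_ratio(judgment_matrix):
        # CR = CI / RI with CI = (lambda_max - n) / (n - 1);
        # a pairwise matrix is usually accepted when CR < 0.1.
        A = np.asarray(judgment_matrix, dtype=float)
        n = A.shape[0]
        if n <= 2:
            return 0.0
        lambda_max = float(np.max(np.real(np.linalg.eigvals(A))))
        ci = (lambda_max - n) / (n - 1)
        return ci / RI[n]

    # Invented 3x3 reciprocal judgment matrix
    A = [[1.0, 3.0, 5.0],
         [1/3, 1.0, 2.0],
         [1/5, 1/2, 1.0]]
    print(consistency_ratio(A))
    ```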

  10. SPORT: An Algorithm for Divisible Load Scheduling with Result Collection on Heterogeneous Systems

    NASA Astrophysics Data System (ADS)

    Ghatpande, Abhay; Nakazato, Hidenori; Beaumont, Olivier; Watanabe, Hiroshi

    Divisible Load Theory (DLT) is an established mathematical framework for studying Divisible Load Scheduling (DLS). However, traditional DLT does not address the scheduling of results back to the source (i.e., result collection), nor does it comprehensively deal with system heterogeneity. In this paper, the DLSRCHETS (DLS with Result Collection on HETerogeneous Systems) problem is addressed. The few papers to date that have dealt with DLSRCHETS proposed simplistic LIFO (Last In, First Out) and FIFO (First In, First Out) types of schedules as solutions. In this paper, a new polynomial-time heuristic algorithm, SPORT (System Parameters based Optimized Result Transfer), is proposed as a solution to the DLSRCHETS problem. With the help of simulations, it is shown that the performance of SPORT is significantly better than that of existing algorithms. The other major contributions of this paper include, for the first time, (a) the derivation of the condition to identify the presence of idle time in a FIFO schedule for two processors, (b) the identification of the limiting condition for the optimality of FIFO and LIFO schedules for two processors, and (c) the introduction of the concept of an equivalent processor in DLS for heterogeneous systems with result collection.

  11. Algorithm Estimates Microwave Water-Vapor Delay

    NASA Technical Reports Server (NTRS)

    Robinson, Steven E.

    1989-01-01

    Accuracy equals or exceeds that of conventional linear algorithms. The "profile" algorithm is an improved algorithm that uses water-vapor-radiometer data to produce estimates of microwave delays caused by water vapor in the troposphere. It does not require site-specific and weather-dependent empirical parameters other than standard meteorological data, latitude, and altitude for use in conjunction with published standard atmospheric data. The basic premise of the profile algorithm is that the wet-path delay is closely approximated by the solution to a simplified version of the nonlinear delay problem, generated numerically from each radiometer observation and the simultaneous meteorological data.

  12. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In the case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.

  13. Genetic Algorithms to Optimize Lecturer Assessment Criteria

    NASA Astrophysics Data System (ADS)

    Jollyta, Deny; Johan; Hajjah, Alyauma

    2017-12-01

    The lecturer assessment criteria are used as a measurement of a lecturer's performance in a college environment. Determining the value for a criterion is complicated and often leads to doubt. The absence of a standard value for each assessment criterion will affect the final results of the assessment and make the data less presentable to college leadership when making decisions related to reward and punishment. The Genetic Algorithm is an algorithm capable of solving non-linear problems. Chromosomes in the random initial population are represented in binary form; the fitness function is evaluated, and the crossover and mutation genetic operators are applied to obtain the desired offspring. The aim is to obtain the optimum criteria values in terms of the fitness function of each chromosome. The training results show that the Genetic Algorithm is able to produce optimal values of the lecturer assessment criteria, which can be used by the college as standard values for those criteria.

  14. Kidney-inspired algorithm for optimization problems

    NASA Astrophysics Data System (ADS)

    Jaddi, Najmeh Sadat; Alvankarian, Jafar; Abdullah, Salwani

    2017-01-01

    In this paper, a population-based algorithm inspired by the kidney process in the human body is proposed. In this algorithm, solutions are filtered at a rate calculated from the mean of the objective values of all solutions in the current population at each iteration. The filtered solutions, being the better solutions, are moved to the filtered blood, and the rest are transferred to the waste, representing the worse solutions. This simulates the glomerular filtration process in the kidney. Waste solutions are reconsidered in later iterations if, after applying a defined movement operator, they satisfy the filtration rate; otherwise they are expelled from the waste, simulating the reabsorption and excretion functions of the kidney. In addition, a solution assigned as a better solution is secreted if it is not better than the worst solutions, simulating the secretion process of blood in the kidney. After placement of all the solutions in the population, the best of them is ranked, the waste and the filtered blood are merged to become a new population, and the filtration rate is updated. Filtration provides the required exploitation while generating a new solution, and reabsorption gives the necessary exploration for the algorithm. The algorithm is assessed by applying it to eight well-known benchmark test functions and comparing the results with other algorithms in the literature. The performance of the proposed algorithm is better on seven out of the eight test functions when compared with the most recent research in the literature. The proposed kidney-inspired algorithm is able to find the global optimum with fewer function evaluations on six out of the eight test functions. A statistical analysis further confirms the ability of this algorithm to produce good-quality results.
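
    The following is a loose sketch of one filtration/reabsorption cycle as described above, applied to a simple benchmark; the movement operator, search bounds, objective function, and replacement of excreted solutions are assumptions, not the authors' reference implementation (the ranking, secretion, and merging details are omitted).

    ```python
    import random

    def sphere(x):
        # Simple benchmark objective (minimization)
        return sum(v * v for v in x)

    def move(x, step=0.1):
        # Assumed movement operator: small random perturbation
        return [v + random.uniform(-step, step) for v in x]

    def kidney_step(population, objective):
        # One filtration/reabsorption cycle sketched from the description above
        rate = sum(objective(x) for x in population) / len(population)  # filtration rate
        filtered_blood = [x for x in population if objective(x) <= rate]
        waste = [x for x in population if objective(x) > rate]
        reabsorbed = []
        for x in waste:
            y = move(x)
            if objective(y) <= rate:          # reabsorption
                reabsorbed.append(y)          # excretion otherwise: y is discarded
        new_pop = filtered_blood + reabsorbed
        while len(new_pop) < len(population):  # refill excreted slots with random solutions
            new_pop.append([random.uniform(-5, 5) for _ in range(len(population[0]))])
        return new_pop

    random.seed(0)
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
    for _ in range(50):
        pop = kidney_step(pop, sphere)
    print(min(sphere(x) for x in pop))
    ```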

  15. Refinements to HIRS CO2 Slicing Algorithm with Results Compared to CALIOP and MODIS

    NASA Astrophysics Data System (ADS)

    Frey, R.; Menzel, P.

    2012-12-01

    This poster reports on the refinement of a cloud top property algorithm using High-resolution Infrared Radiation Sounder (HIRS) measurements. The HIRS sensor has been flown on fifteen satellites from TIROS-N through NOAA-19 and MetOp-A forming a continuous 30 year cloud data record. Cloud Top Pressure and effective emissivity (cloud fraction multiplied by cloud emissivity) are derived using the 15 μm spectral bands in the CO2 absorption band, implementing the CO2 slicing technique which is strong for high semi-transparent clouds but weak for low clouds with little thermal contrast from clear skies. We report on algorithm adjustments suggested from MODIS cloud record validations and the inclusion of collocated AVHRR cloud fraction data from the PATMOS-x algorithm. Reprocessing results for 2008 are shown using NOAA-18 HIRS and collocated CALIOP data for validation, as well as comparisons to MODIS monthly mean values. Adjustments to the cloud algorithm include (a) using CO2 slicing for all ice and mixed phase clouds and infrared window determinations for all water clouds, (b) determining the cloud top pressure from the most opaque CO2 spectral band pair seeing the cloud, (c) reducing the cloud detection threshold for the CO2 slicing algorithm to include conditions of smaller radiance differences that are often due to thin ice clouds, and (d) identifying stratospheric clouds when an opaque band is warmer than a less opaque band.
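
    For reference, the CO2-slicing technique mentioned above is commonly written as a ratio of cloud-induced radiance differences for two nearby CO2 channels; the expression below is the standard textbook form, not reproduced from this poster.

    ```latex
    % Standard CO2-slicing ratio for two nearby sounding channels nu_1 and nu_2,
    % with the cloud emissivities assumed equal so that they cancel.
    \[
      \frac{I(\nu_1) - I_{\mathrm{clear}}(\nu_1)}{I(\nu_2) - I_{\mathrm{clear}}(\nu_2)}
      \;\approx\;
      \frac{\int_{P_s}^{P_c} \tau(\nu_1, p)\,\dfrac{\partial B[\nu_1, T(p)]}{\partial p}\,dp}
           {\int_{P_s}^{P_c} \tau(\nu_2, p)\,\dfrac{\partial B[\nu_2, T(p)]}{\partial p}\,dp}
    \]
    ```

    Here I is the measured radiance, I_clear the computed clear-sky radiance, tau the transmittance profile, B the Planck radiance, T(p) the temperature profile, and P_s and P_c the surface and cloud-top pressures; the retrieved cloud-top pressure is the P_c that best satisfies this relation for the chosen band pair.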

  16. How Effective Is Algorithm-Guided Treatment for Depressed Inpatients? Results from the Randomized Controlled Multicenter German Algorithm Project 3 Trial

    PubMed Central

    Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael

    2017-01-01

    Background: Treatment algorithms are considered key to improving outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Methods: Inpatients aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES); the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Results: Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031), and for ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rate (66.2%). Conclusions: Highly structured algorithm-guided treatment is associated with shorter times to remission and fewer medication changes in depressed inpatients than treatment as usual or computerized medication choice guidance.

  17. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  18. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies have evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithm seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging studies create the need, and raise the question, whether some registration algorithms can 1) apply generally to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy, and robustness. We fixed their parameters at the values suggested by the algorithm developers as reported in the literature. We tested them on 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise, and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured against (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools and public databases (whenever possible), and we fully disclose the parameter settings. We show the evaluation results and discuss the performances in light of the algorithms' similarity metrics, transformation models, and optimization strategies. We also discuss future directions for algorithm development and evaluation.

  19. The Aquarius Salinity Retrieval Algorithm: Early Results

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David

    2012-01-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview over the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind. This is based on the radar backscatter measurements by the scatterometer. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last process and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparing versus in situ measurements, an extensive calibration and validation

  20. Hydrodynamical simulation of detonations in superbursts. I. The hydrodynamical algorithm and some preliminary one-dimensional results

    NASA Astrophysics Data System (ADS)

    Noël, C.; Busegnies, Y.; Papalexandris, M. V.; Deledicque, V.; El Messoudi, A.

    2007-08-01

    Aims: This work presents a new hydrodynamical algorithm to study astrophysical detonations. A prime motivation of this development is the description of a carbon detonation in conditions relevant to superbursts, which are thought to result from the propagation of a detonation front around the surface of a neutron star in the carbon layer underlying the atmosphere. Methods: The algorithm we have developed is a finite-volume method inspired by the original MUSCL scheme of van Leer (1979). The algorithm is of second order in the smooth part of the flow and avoids dimensional splitting. It is applied to some test cases, and the time-dependent results are compared to the corresponding steady-state solutions. Results: Our algorithm proves to be robust in the test cases and is considered reliably applicable to astrophysical detonations. The preliminary one-dimensional calculations we have performed demonstrate that carbon detonation at the surface of a neutron star is a multiscale phenomenon: the length scale over which energy is liberated is 10^6 times smaller than the total reaction length. We show that a multi-resolution approach can be used to resolve all the reaction lengths. This result will be very useful in future multi-dimensional simulations. We also present thermodynamical and composition profiles after the passage of a detonation in a pure carbon or mixed carbon-iron layer, in thermodynamical conditions relevant to superbursts in pure helium accretor systems.

  1. SMOS L1PP Performance Analysis from Commissioning Phase - Improved Algorithms and Major Results

    NASA Astrophysics Data System (ADS)

    Castro, Rita; Oliva, Roger; Gutiérrez, Antonio; Barbosa, José; Catarino, Nuno; Martin-Neira, Manuel; Zundo, Michele; Cabot, François

    2010-05-01

    Following the Soil Moisture and Ocean Salinity (SMOS) launch in November 2009, a Commissioning Phase took place for six months, during which Deimos cooperated closely with the European Space Agency (ESA) Level 1 team. During these six months, several studies were conducted on calibration optimization, image reconstruction improvement, geolocation assessment, and the impact on scientific results, in particular to ensure optimal input to the Level 2 Soil Moisture and Ocean Salinity retrieval. In parallel with the scientific studies, some new algorithms/mitigation techniques had to be developed, tested, and implemented during the Commissioning Phase. Prior to launch, the Level 1 Prototype Processor (L1PP) already included several experimental algorithms different from those in the operational chain. These algorithms were tested during Commissioning, and some were included in the final processing baseline as a result of the planned studies. Some unforeseen algorithms had to be defined, implemented, and tested during the Commissioning Phase itself, and these will also be described below. In L1a, for example, the calibration of the Power Measuring Systems (PMS) can be done using a cold target as reference, i.e., the sky at ~3 K. This has been extensively analyzed and the results will be presented here. At least two linearity corrections to the PMS response function have been tested and compared. The deflection method was selected for inclusion in the operational chain, and the results leading to this decision will also be presented. In Level 1B, all the foreign-source algorithms have been tested and validated using real data. The System Response Function (G-matrix) computed for different events has been analyzed, and criteria for validation of its pseudo-inverse, the J+ matrix, were defined during the Commissioning Phase. The impact of errors in the J+ matrix has been studied and well characterized. The effects of the Flat Target Response (FTR) have also been

  2. Path Planning Algorithms for Autonomous Border Patrol Vehicles

    NASA Astrophysics Data System (ADS)

    Lau, George Tin Lam

    This thesis presents an online path planning algorithm developed for unmanned vehicles in charge of autonomous border patrol. In this Pursuit-Evasion game, the unmanned vehicle is required to capture multiple trespassers on its own before any of them reach a target safe house where they are safe from capture. The problem formulation is based on Isaacs' Target Guarding problem, but extended to the case of multiple evaders. The proposed path planning method is based on Rapidly-exploring random trees (RRT) and is capable of producing trajectories within several seconds to capture 2 or 3 evaders. Simulations are carried out to demonstrate that the resulting trajectories approach the optimal solution produced by a nonlinear programming-based numerical optimal control solver. Experiments are also conducted on unmanned ground vehicles to show the feasibility of implementing the proposed online path planning algorithm on physical applications.
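
    Since the planner builds on RRT, a generic two-dimensional RRT sketch is given below to illustrate the tree-growing step; the obstacle-free workspace, step size, and goal tolerance are invented, and the pursuit-evasion extension of the thesis is not implemented here.

    ```python
    import math
    import random

    def rrt(start, goal, x_max, y_max, step=0.5, goal_tol=0.5, max_iters=5000):
        # Generic 2-D RRT on an obstacle-free rectangle (illustrative only)
        nodes = [start]
        parent = {0: None}
        for _ in range(max_iters):
            sample = (random.uniform(0, x_max), random.uniform(0, y_max))
            # Find the nearest existing node to the random sample
            i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
            near = nodes[i_near]
            # Steer from the nearest node toward the sample by at most one step length
            d = math.dist(near, sample)
            t = min(1.0, step / d) if d > 0 else 0.0
            new = (near[0] + t * (sample[0] - near[0]),
                   near[1] + t * (sample[1] - near[1]))
            parent[len(nodes)] = i_near
            nodes.append(new)
            if math.dist(new, goal) < goal_tol:
                # Walk back through the parent pointers to recover the path
                path, i = [], len(nodes) - 1
                while i is not None:
                    path.append(nodes[i])
                    i = parent[i]
                return list(reversed(path))
        return None

    random.seed(1)
    print(len(rrt((0.0, 0.0), (9.0, 9.0), 10.0, 10.0) or []))
    ```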

  3. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (~3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.

  4. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
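
    A minimal sketch of the second-pass idea described above, i.e., outlier detection over a sliding 80 ms window; the median-based rejection rule, frame rate, and relative tolerance are assumptions rather than the paper's exact criterion.

    ```python
    import numpy as np

    def reject_pitch_outliers(f0, frame_rate_hz=100.0, window_ms=80.0, rel_tol=0.25):
        # Flag pitch values that deviate strongly from the local distribution
        # in a sliding window; flagged frames are set to 0 (unvoiced/unknown)
        # so that a later pass can reconstruct them from the waveform.
        f0 = np.asarray(f0, dtype=float)
        half = max(1, int(round(window_ms / 1000.0 * frame_rate_hz / 2)))
        cleaned = f0.copy()
        for i in range(len(f0)):
            window = f0[max(0, i - half): i + half + 1]
            voiced = window[window > 0]
            if len(voiced) == 0:
                continue
            local_median = np.median(voiced)
            if f0[i] > 0 and abs(f0[i] - local_median) > rel_tol * local_median:
                cleaned[i] = 0.0
        return cleaned

    track = [120, 121, 119, 240, 122, 118, 0, 117, 60, 119]  # 240 and 60 are octave errors
    print(reject_pitch_outliers(track))
    ```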

  5. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of the materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. In this paper, we therefore develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process evolves, producing legal solutions becomes more and more difficult, so many solutions are eliminated and the time needed to produce solutions grows. Therefore, an improved algorithm is presented in this paper, which combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments were conducted to test the effectiveness and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.

  6. Generation of referring expressions: assessing the Incremental Algorithm.

    PubMed

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-07-01

    A substantial amount of recent work in natural language generation has focused on the generation of ''one-shot'' referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We test this hypothesis by eliciting referring expressions from human subjects and computing the similarity between the expressions elicited and the ones generated by algorithms. It turns out that the success of the IA depends substantially on the ''preference order'' (PO) employed by the IA, particularly in complex domains. While some POs cause the IA to produce referring expressions that are very similar to expressions produced by human subjects, others cause the IA to perform worse than its main competitors; moreover, it turns out to be difficult to predict the success of a PO on the basis of existing psycholinguistic findings or frequencies in corpora. We also examine the computational complexity of the algorithms in question and argue that there are no compelling reasons for preferring the IA over some of its main competitors on these grounds. We conclude that future research on the generation of referring expressions should explore alternatives to the IA, focusing on algorithms, inspired by the Greedy Algorithm, which do not work with a fixed PO. Copyright © 2011 Cognitive Science Society, Inc.
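
    For readers unfamiliar with it, the following compact sketch shows the core loop usually attributed to the Incremental Algorithm: attributes are considered in a fixed preference order and added whenever they rule out at least one remaining distractor. The toy domain, attribute names, and preference order are invented for illustration.

    ```python
    def incremental_algorithm(target, distractors, preference_order, attributes):
        # Add attributes in preference order while they remove distractors;
        # returns None if no distinguishing description is found.
        description = {}
        remaining = set(distractors)
        for attr in preference_order:
            value = attributes[target][attr]
            ruled_out = {d for d in remaining if attributes[d][attr] != value}
            if ruled_out:
                description[attr] = value
                remaining -= ruled_out
            if not remaining:
                break
        return description if not remaining else None

    # Invented toy domain: identify e1 among e2 and e3
    attributes = {
        "e1": {"type": "chair", "colour": "red",   "size": "large"},
        "e2": {"type": "chair", "colour": "green", "size": "large"},
        "e3": {"type": "table", "colour": "red",   "size": "small"},
    }
    print(incremental_algorithm("e1", ["e2", "e3"], ["type", "colour", "size"], attributes))
    ```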

  7. Upper cervical injuries: Clinical results using a new treatment algorithm

    PubMed Central

    Joaquim, Andrei F.; Ghizoni, Enrico; Tedeschi, Helder; Yacoub, Alexandre R. D.; Brodke, Darrel S.; Vaccaro, Alexander R.; Patel, Alpesh A.

    2015-01-01

    Introduction: Upper cervical injuries (UCI) have a wide range of radiological and clinical presentations due to the unique and complex bony, ligamentous, and vascular anatomy. We recently proposed a rational approach in an attempt to unify prior classification systems and guide treatment. In this paper, we evaluate the clinical results of our algorithm for UCI treatment. Materials and Methods: A prospective cohort series of patients with UCI was performed. The primary outcome was the ASIA Impairment Scale (AIS) grade. Surgical treatment was proposed based on our protocol: ligamentous injuries (abnormal misalignment, perched or locked facets, increased atlanto-dens interval) were treated surgically. Bone fractures without ligamentous injuries were treated with a rigid cervical orthosis, with the exception of fractures at the dens base with risk factors for non-union. Results: Twenty-three patients treated initially conservatively had some follow-up (mean of 171 days, range from 60 to 436 days). All of them were neurologically intact. None of the patients developed a new neurological deficit. Fifteen patients were initially treated surgically (mean of 140 days of follow-up, ranging from 60 to 270 days). In the surgical group, preoperatively, 11 (73.3%) patients were AIS E, 2 (13.3%) AIS C, and 2 (13.3%) AIS D. At the final follow-up, the AIS grades were: 13 (86.6%) AIS E and 2 (13.3%) AIS D. None of the patients had neurological worsening during the follow-up. Conclusions: This prospective cohort suggests that our UCI treatment algorithm can be safely used. Further prospective studies with longer follow-up are necessary to further establish its clinical validity and safety.

  8. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2015-01-01

    This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.

  9. A generalized global alignment algorithm.

    PubMed

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
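
    The classical recurrence that GAP3 generalizes can be sketched as follows; this is the standard Needleman-Wunsch global alignment score, not GAP3's generalized model for intermittent similarities, and the scoring parameters are arbitrary.

    ```python
    def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
        # dp[i][j] = best score for aligning a[:i] with b[:j]
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            dp[i][0] = dp[i - 1][0] + gap
        for j in range(1, n + 1):
            dp[0][j] = dp[0][j - 1] + gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
        return dp[m][n]

    print(global_alignment_score("GATTACA", "GCATGCA"))
    ```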

  10. SST algorithm based on radiative transfer model

    NASA Astrophysics Data System (ADS)

    Mat Jafri, Mohd Z.; Abdullah, Khiruddin; Bahari, Alui

    2001-03-01

    An algorithm for measuring sea surface temperature (SST) without recourse to in-situ data for calibration has been proposed. The algorithm, which is based on the infrared signal recorded by the satellite sensor, is composed of three terms, namely the surface emission, the up-welling radiance emitted by the atmosphere, and the down-welling atmospheric radiance reflected at the sea surface. This algorithm requires the transmittance values of the thermal bands. The angular dependence of the transmittance function was modeled using the MODTRAN code. Radiosonde data were used with the MODTRAN code. The expression of transmittance as a function of zenith view angle was obtained for each channel through regression of the MODTRAN output. The Ocean Color Temperature Scanner (OCTS) data from the Advanced Earth Observation Satellite (ADEOS) were used in this study. The study area covers the seas off the northwest of Peninsular Malaysia. The in-situ data (ship-collected SST values) were used for verification of the results. Cloud-contaminated pixels were masked out using the standard procedures that have been applied to Advanced Very High Resolution Radiometer (AVHRR) data. The cloud-free pixels at the in-situ sites were extracted for analysis. The OCTS data were then substituted into the proposed algorithm. The appropriate transmittance value for each channel was then assigned in the calculation. Accuracy was assessed by examining the correlation and the RMS deviations between the computed and the ship-collected values. The results were also compared with the results from the OCTS multi-channel sea surface temperature algorithm. The comparison produced high correlation values. The performance of this algorithm is comparable with that of the established OCTS algorithm. The effect of emissivity on the retrieved SST values was also investigated. An SST map was generated and contoured manually.
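
    The three terms referred to above correspond to the standard single-channel thermal radiative transfer form, sketched below in generic notation rather than the paper's exact symbols.

    ```latex
    % Generic single-channel thermal radiative transfer: surface emission,
    % upwelling atmospheric emission, and reflected downwelling radiance.
    \[
      L_{\mathrm{sensor}}(\lambda,\theta)
        = \varepsilon(\lambda)\, B(\lambda, T_s)\, \tau(\lambda,\theta)
        + L_{\mathrm{atm}}^{\uparrow}(\lambda,\theta)
        + \left(1-\varepsilon(\lambda)\right) L_{\mathrm{atm}}^{\downarrow}(\lambda)\, \tau(\lambda,\theta)
    \]
    ```

    Here B is the Planck radiance, T_s the sea surface temperature, tau(lambda, theta) the view-angle-dependent transmittance (obtained in this study from MODTRAN regressions), and epsilon the sea surface emissivity; SST follows from inverting the Planck function once the two atmospheric terms have been removed.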

  11. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
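
    Of the five methods compared, the generic edit distance is the simplest; the sketch below pairs it with a nearest-dictionary-word lookup using an invented toy vocabulary. The probabilistic substitution matrix and the Bayesian variants are not shown.

    ```python
    def edit_distance(s, t):
        # Generic Levenshtein edit distance with unit costs (row-by-row DP)
        m, n = len(s), len(t)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1,                              # deletion
                             cur[j - 1] + 1,                           # insertion
                             prev[j - 1] + (s[i - 1] != t[j - 1]))     # substitution
            prev = cur
        return prev[n]

    def correct(word, dictionary):
        # Return the dictionary entry with the smallest edit distance to the OCR output
        return min(dictionary, key=lambda w: edit_distance(word, w))

    dictionary = ["medicine", "medical", "library", "national"]  # invented toy vocabulary
    print(correct("rnedicine", dictionary))  # common OCR confusion: 'm' read as 'rn'
    ```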

  12. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel images. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
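
    A minimal sketch of the zero-mean normalized correlation that underlies both sub-algorithms; the exhaustive spatial search and the synthetic image below are illustrative only, whereas the report's FFT Map Matching performs the large-template correlation in the frequency domain.

    ```python
    import numpy as np

    def normalized_cross_correlation(template, patch):
        # Zero-mean normalized cross-correlation between equally sized grayscale patches;
        # values near 1 indicate a strong match.
        t = template.astype(float) - template.mean()
        p = patch.astype(float) - patch.mean()
        denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
        return float((t * p).sum() / denom) if denom > 0 else 0.0

    def match_template(image, template):
        # Exhaustive spatial-domain search for the best-matching location
        H, W = image.shape
        h, w = template.shape
        best, best_pos = -2.0, (0, 0)
        for r in range(H - h + 1):
            for c in range(W - w + 1):
                score = normalized_cross_correlation(template, image[r:r + h, c:c + w])
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos, best

    rng = np.random.default_rng(0)
    img = rng.random((40, 40))
    tmpl = img[10:18, 22:30].copy()
    print(match_template(img, tmpl))  # expected: ((10, 22), ~1.0)
    ```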

  13. Human vision-based algorithm to hide defective pixels in LCDs

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert

    2006-02-01

    Producing displays without pixel defects or repairing defective pixels is technically not possible at this moment. This paper presents a new approach to solve this problem: defects are made invisible to the user by using image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First of all, the theoretical response of the human eye was analyzed on a series of images, both before and after applying the defective pixel compensation algorithm. These results show that it is indeed possible to mask a defective pixel. The second method was a psycho-visual test in which users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as a pixel-data-processing algorithm inside the display, for instance in an FPGA, a DSP, or a microprocessor. The described techniques are also valid for both monochrome and color displays, ranging from high-quality medical displays to consumer LCD TV applications.

  14. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.

  15. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.

  16. Advisory Algorithm for Scheduling Open Sectors, Operating Positions, and Workstations

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Drew, Michael; Lai, Chok Fung; Bilimoria, Karl D.

    2012-01-01

    Air traffic controller supervisors configure available sector, operating position, and work-station resources to safely and efficiently control air traffic in a region of airspace. In this paper, an algorithm for assisting supervisors with this task is described and demonstrated on two sample problem instances. The algorithm produces configuration schedule advisories that minimize a cost. The cost is a weighted sum of two competing costs: one penalizing mismatches between configurations and predicted air traffic demand and another penalizing the effort associated with changing configurations. The problem considered by the algorithm is a shortest path problem that is solved with a dynamic programming value iteration algorithm. The cost function contains numerous parameters. Default values for most of these are suggested based on descriptions of air traffic control procedures and subject-matter expert feedback. The parameter determining the relative importance of the two competing costs is tuned by comparing historical configurations with corresponding algorithm advisories. Two sample problem instances for which appropriate configuration advisories are obvious were designed to illustrate characteristics of the algorithm. Results demonstrate how the algorithm suggests advisories that appropriately utilize changes in airspace configurations and changes in the number of operating positions allocated to each open sector. The results also demonstrate how the advisories suggest appropriate times for configuration changes.
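
    A toy dynamic-programming sketch of the shortest-path formulation described above: one configuration is chosen per time step to minimize a weighted sum of a demand-mismatch cost and a reconfiguration cost. The configuration set, capacities, demand profile, and weights are all invented and far simpler than the operational cost function.

    ```python
    configs = ["1-sector", "2-sector", "3-sector"]
    capacity = {"1-sector": 15, "2-sector": 30, "3-sector": 45}
    demand = [10, 25, 40, 35, 12]           # invented predicted traffic per time step
    w_mismatch, w_change = 1.0, 5.0         # relative weights of the two competing costs

    def mismatch(cfg, dem):
        return abs(capacity[cfg] - dem)

    def best_schedule():
        # cost[c] = minimal total cost of a schedule ending in configuration c
        cost = {c: w_mismatch * mismatch(c, demand[0]) for c in configs}
        back = []                            # back-pointers for steps 1..T-1
        for dem in demand[1:]:
            new_cost, ptr = {}, {}
            for c in configs:
                prev = min(configs, key=lambda p: cost[p] + (w_change if p != c else 0.0))
                new_cost[c] = (cost[prev] + (w_change if prev != c else 0.0)
                               + w_mismatch * mismatch(c, dem))
                ptr[c] = prev
            cost = new_cost
            back.append(ptr)
        # Trace the cheapest schedule backwards through the pointers
        last = min(cost, key=cost.get)
        schedule = [last]
        for ptr in reversed(back):
            schedule.append(ptr[schedule[-1]])
        return list(reversed(schedule)), cost[last]

    print(best_schedule())
    ```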

  17. NASA, Navy, and AES/York sea ice concentration comparison of SSM/I algorithms with SAR derived values

    NASA Technical Reports Server (NTRS)

    Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James

    1991-01-01

    Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in both the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident to the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).

  18. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance, and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.

  19. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  20. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

    An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is, therefore, highly demanded to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform.
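
    A sketch of the two later stages described above, splitting touching cells with a distance transform plus watershed and classifying per-cell feature vectors with a GMM, using OpenCV and scikit-learn; the thresholds, kernel sizes, synthetic image, and feature choice are assumptions rather than the paper's settings.

    ```python
    import numpy as np
    import cv2
    from sklearn.mixture import GaussianMixture

    def separate_clustered_cells(gray):
        # Otsu threshold, distance transform, then watershed seeded from the peaks
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        sure_bg = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        _, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
        sure_fg = sure_fg.astype(np.uint8)
        unknown = cv2.subtract(sure_bg, sure_fg)
        _, markers = cv2.connectedComponents(sure_fg)
        markers = markers + 1                # reserve label 0 for the unknown region
        markers[unknown == 255] = 0
        markers = cv2.watershed(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR), markers)
        return markers                       # label image; boundaries are marked -1

    def classify_cells(feature_matrix, n_classes=2):
        # GMM classification of per-cell feature vectors (e.g. area, mean intensity)
        gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(feature_matrix)
        return gmm.predict(feature_matrix)

    # Tiny synthetic demo: two overlapping bright disks on a dark background
    demo = np.zeros((120, 120), np.uint8)
    cv2.circle(demo, (35, 60), 28, 255, -1)
    cv2.circle(demo, (85, 60), 28, 255, -1)
    labels = separate_clustered_cells(demo)
    print(len(np.unique(labels)) - 2)  # separated cells, excluding background and boundary
    ```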

  1. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate does not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of
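
    A minimal GA sketch reflecting the parameter guidance above (tournament selection and a per-gene mutation rate of roughly half the inverse of the population size); the real-valued encoding and benchmark objective are invented and do not represent the geophysical inversions described here.

    ```python
    import random

    def fitness(x):
        # Invented benchmark: maximize the negative sphere function (optimum at 0)
        return -sum(v * v for v in x)

    def tournament(pop, k=2):
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):
        # Single-point crossover
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(x, rate, step=0.5):
        return [v + random.uniform(-step, step) if random.random() < rate else v for v in x]

    def genetic_algorithm(dim=5, pop_size=40, generations=200):
        mutation_rate = 0.5 / pop_size   # low rate, as recommended above
        pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            pop = [mutate(crossover(tournament(pop), tournament(pop)), mutation_rate)
                   for _ in range(pop_size)]
        return max(pop, key=fitness)

    random.seed(0)
    print(fitness(genetic_algorithm()))
    ```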

  2. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free

  3. Learning-based meta-algorithm for MRI brain extraction.

    PubMed

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    The multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through exemplars. We further develop a level-set based fusion method to combine multiple candidate extractions together with a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method is able to produce more accurate and robust brain extraction results, achieving a Jaccard index of 0.956 +/- 0.010 over a total of 340 subjects under 6-fold cross-validation, compared to those obtained by BET and BSE even using their best parameter combinations.

  4. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  5. Study of parameter identification using hybrid neural-genetic algorithm in electro-hydraulic servo system

    NASA Astrophysics Data System (ADS)

    Moon, Byung-Young

    2005-12-01

    A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated. The method can be applied to structured system identification of an electro-hydraulic servo system. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the new generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and experiments were carried out to validate the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, namely the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.

  6. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
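
    The latitude-independence discussed above comes from searching sectors of fixed physical size rather than a fixed number of grid points. The sketch below only illustrates that idea with a great-circle distance test; it is not the published Stride Search implementation, and the function names and 500 km radius are illustrative assumptions.

      from math import radians, sin, cos, asin, sqrt

      EARTH_RADIUS_KM = 6371.0

      def great_circle_km(lat1, lon1, lat2, lon2):
          # Haversine distance between two points given in degrees.
          p1, p2 = radians(lat1), radians(lat2)
          dphi = p2 - p1
          dlmb = radians(lon2 - lon1)
          a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
          return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

      def points_within_radius(center, grid_points, radius_km=500.0):
          # Collect grid points inside a sector of fixed physical size, so the
          # search behaves the same at the equator and near the poles.
          clat, clon = center
          return [(lat, lon) for lat, lon in grid_points
                  if great_circle_km(clat, clon, lat, lon) <= radius_km]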

  7. A Locomotion Control Algorithm for Robotic Linkage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dohner, Jeffrey L.

    This dissertation describes the development of a control algorithm that transitions a robotic linkage system between stabilized states producing responsive locomotion. The developed algorithm is demonstrated using a simple robotic construction consisting of a few links with actuation and sensing at each joint. Numerical and experimental validation is presented.

  8. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  9. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first shuffle the IP core assignment for each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time; the core assignment is then accepted or rejected according to the simulated annealing criterion until the optimum solution is attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
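
    The acceptance rule referred to above is the standard Metropolis criterion of simulated annealing. The sketch below is a generic SA loop, not the authors' scheduler: the cost function (testing time), the perturbation move, and the cooling parameters are placeholder assumptions.

      import math
      import random

      def simulated_annealing(initial, cost, perturb, t0=100.0, alpha=0.95, steps=10000):
          # Generic SA loop: perturb the current solution and accept worse candidates
          # with probability exp(-delta / T), so the search can escape local optima.
          current, best = initial, initial
          t = t0
          for _ in range(steps):
              candidate = perturb(current)
              delta = cost(candidate) - cost(current)
              if delta <= 0 or random.random() < math.exp(-delta / t):
                  current = candidate
                  if cost(current) < cost(best):
                      best = current
              t *= alpha   # geometric cooling schedule
          return best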

  10. An improved hybrid of particle swarm optimization and the gravitational search algorithm to produce a kinetic parameter estimation of aspartate biochemical pathways.

    PubMed

    Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal

    2017-12-01

    Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in
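
    For orientation, the particle-swarm component that hybrids such as the one above build on updates each particle's velocity and position as follows (textbook notation, not the paper's; the gravitational-search coupling is omitted):

      $v_i^{t+1} = \omega v_i^{t} + c_1 r_1 (p_i - x_i^{t}) + c_2 r_2 (g - x_i^{t}), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}$

    where $p_i$ is the particle's personal best, $g$ the global best, $\omega$ the inertia weight, $c_1, c_2$ acceleration coefficients and $r_1, r_2$ uniform random numbers in $[0, 1]$.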

  11. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.

  12. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  13. A fast non-local means algorithm based on integral image and reconstructed similar kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zheng; Song, Enmin

    2018-03-01

    Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) denoising approach is a remarkable denoising technique. However, its computational time complexity is high. In this paper, we design a fast NLM algorithm based on the integral image and a reconstructed similar kernel. First, the integral image is introduced into the traditional NLM algorithm. In doing so, it eliminates a great deal of repetitive operations in the parallel processing, which greatly improves the running speed of the algorithm. Secondly, in order to amend the error of the integral image, we construct a similar window resembling the Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence produced by replacing the Gaussian-weighted Euclidean distance with the Euclidean distance, we propose a scheme to construct a similar kernel with a size of 3 × 3 in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of peak signal-to-noise ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
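
    The integral-image trick mentioned above is standard: precompute cumulative sums so that the sum over any rectangular window is obtained from four lookups instead of a full loop. A minimal NumPy sketch under that assumption (window coordinates and the test image are placeholders, not the paper's pipeline):

      import numpy as np

      def integral_image(img):
          # Cumulative sums over both axes, padded so window sums need no edge cases.
          ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
          return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

      def window_sum(ii, r0, c0, r1, c1):
          # Sum of img[r0:r1, c0:c1] from four corner lookups of the padded integral image.
          return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

    For any image, window_sum(integral_image(img), r0, c0, r1, c1) equals img[r0:r1, c0:c1].sum(), which is what makes the patch-distance computations in NLM amenable to constant-time window evaluation.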

  14. Producing approximate answers to database queries

    NASA Technical Reports Server (NTRS)

    Vrbsky, Susan V.; Liu, Jane W. S.

    1993-01-01

    We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.

  15. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution which are used in conjunction with an appropriate threshold criteria to adapt the collocation grid. The efficient data structures for grid representation as well as related computational algorithms to support grid rearrangement procedure are developed. The algorithm is applied to the simulation of phenomena described by Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of comparable accuracy as the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.

  16. Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas

    NASA Technical Reports Server (NTRS)

    Smith, Barbara M.; Bennett, Sean

    1992-01-01

    A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.

  17. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade off. A model of compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real time robotic systems that automatically adjust resource allocation to yield optimum performance.

  18. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network.

    PubMed

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-06-01

    Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.

  19. Finite-time stabilization of chaotic gyros based on a homogeneous supertwisting-like algorithm

    NASA Astrophysics Data System (ADS)

    Khamsuwan, Pitcha; Sangpet, Teerawat; Kuntanapreeda, Suwat

    2018-01-01

    This paper presents a finite-time stabilization scheme for nonlinear chaotic gyros. The scheme utilizes a supertwisting-like continuous control algorithm for systems of dimension greater than one with a Lipschitz disturbance. The algorithm yields finite-time convergence similar to that produced by discontinuous sliding mode control algorithms. To design the controller, the nonlinearities in the gyro are treated as a disturbance in the system. Thanks to the dissipativeness of chaotic systems, the nonlinearities also possess the Lipschitz property. Numerical results are provided to illustrate the effectiveness of the scheme.
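
    For context, the classical super-twisting structure that such supertwisting-like controllers generalize is usually written as (textbook form with sliding variable $s$ and gains $k_1, k_2 > 0$; this is not necessarily the exact homogeneous variant used in the paper):

      $u = -k_1 |s|^{1/2}\,\mathrm{sign}(s) + v, \qquad \dot{v} = -k_2\,\mathrm{sign}(s)$

    The control signal itself is continuous, yet it enforces finite-time convergence comparable to discontinuous sliding mode laws, which is the property exploited above.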

  20. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system.

    PubMed

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

    An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is, therefore, highly demanded to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform.

  1. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for a large number of customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in the case of transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery costs or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA) and hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and have a high risk of becoming trapped in local optima. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  2. Improved Passive Microwave Algorithms for North America and Eurasia

    NASA Technical Reports Server (NTRS)

    Foster, James; Chang, Alfred; Hall, Dorothy

    1997-01-01

    Microwave algorithms simplify complex physical processes in order to estimate geophysical parameters such as snow cover and snow depth. The microwave radiances received at the satellite sensor and expressed as brightness temperatures are a composite of contributions from the Earth's surface, the Earth's atmosphere and from space. Owing to the coarse resolution inherent to passive microwave sensors, each pixel value represents a mixture of contributions from different surface types including deep snow, shallow snow, forests and open areas. Algorithms are generated in order to resolve these mixtures. The accuracy of the retrieved information is affected by uncertainties in the assumptions used in the radiative transfer equation (Steffen et al., 1992). One such uncertainty in the Chang et al., (1987) snow algorithm is that the snow grain radius is 0.3 mm for all layers of the snowpack and for all physiographic regions. However, this is not usually the case. The influence of larger grain sizes appears to be of more importance for deeper snowpacks in the interior of Eurasia. Based on this consideration and the effects of forests, a revised SMMR snow algorithm produces more realistic snow mass values. The purpose of this study is to present results of the revised algorithm (referred to for the remainder of this paper as the GSFC 94 snow algorithm) which incorporates differences in both fractional forest cover and snow grain size. Results from the GSFC 94 algorithm will be compared to the original Chang et al. (1987) algorithm and to climatological snow depth data as well.
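
    The baseline Chang et al. (1987) retrieval referred to here is commonly quoted as a linear brightness-temperature difference between the horizontally polarized 18 and 37 GHz SMMR channels, roughly of the form

      $SD \approx 1.59\,(T_{18H} - T_{37H})\ \mathrm{cm}$

    with brightness temperatures in kelvin; the coefficient is approximate and shown only for orientation. The GSFC 94 revision described above adjusts such a relation for fractional forest cover and regional snow grain size rather than assuming a single 0.3 mm grain radius everywhere.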

  3. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  4. Determining the bias and variance of a deterministic finger-tracking algorithm.

    PubMed

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σ_x = 0.16 cm, σ_y = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.

  5. SU-E-T-502: Initial Results of a Comparison of Treatment Plans Produced From Automated Prioritized Planning Method and a Commercial Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiwari, P; Chen, Y; Hong, L

    2015-06-15

    Purpose We developed an automated treatment planning system based on a hierarchical goal programming approach. To demonstrate the feasibility of our method, we report the comparison of prostate treatment plans produced from the automated treatment planning system with those produced by a commercial treatment planning system. Methods In our approach, we prioritized the goals of the optimization, and solved one goal at a time. The purpose of prioritization is to ensure that higher priority dose-volume planning goals are not sacrificed to improve lower priority goals. The algorithm has four steps. The first step optimizes dose to the target structures, while sparing key sensitive organs from radiation. In the second step, the algorithm finds the best beamlet weight to reduce toxicity risks to normal tissue while holding the objective function achieved in the first step as a constraint, with a small amount of allowed slip. Likewise, the third and fourth steps introduce lower priority normal tissue goals and beam smoothing. We compared against prostate treatment plans from Memorial Sloan Kettering Cancer Center developed using Eclipse, with a prescription dose of 72 Gy. A combination of linear, quadratic, and gEUD objective functions was used with a modified open source solver code (IPOPT). Results Initial plan results on 3 different cases show that the automated planning system is capable of competing with or improving on expert-driven Eclipse plans. Compared to the Eclipse planning system, the automated system produced up to 26% less mean dose to rectum and 24% less mean dose to bladder while having the same D95 (after matching) to the target. Conclusion We have demonstrated that Pareto optimal treatment plans can be generated automatically without a trial-and-error process. The solver finds an optimal plan for the given patient, as opposed to database-driven approaches that set parameters based on geometry and population modeling.
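
    The hierarchical scheme described can be summarized as a lexicographic sequence of optimizations: at stage $k$ the planner solves

      $\min_{x} \; f_k(x) \quad \text{s.t.} \quad f_j(x) \le f_j^{*} + \delta_j, \;\; j = 1, \dots, k-1$

    where $f_j^{*}$ is the optimum found at stage $j$ and $\delta_j$ is the small allowed slip mentioned above. The notation here is ours, not the authors', and is only meant to make the "one goal at a time" structure explicit.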

  6. The GRAPE aerosol retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Thomas, G. E.; Poulsen, C. A.; Sayer, A. M.; Marsh, S. H.; Dean, S. M.; Carboni, E.; Siddans, R.; Grainger, R. G.; Lawrence, B. N.

    2009-11-01

    The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations - this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set. The algorithm is described in detail and its performance examined. This includes a discussion of errors resulting from the formulation of the forward model, sensitivity of the retrieval to the measurements and a priori constraints, and errors resulting from assumptions made about the atmospheric/surface state.
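
    Optimal estimation retrievals of this kind minimize the usual maximum a posteriori cost function (standard Rodgers-style notation; the specific state vector, forward model and covariances are those of ORAC and are not reproduced here):

      $J(\mathbf{x}) = \left[\mathbf{y} - F(\mathbf{x})\right]^{T} \mathbf{S}_{\epsilon}^{-1} \left[\mathbf{y} - F(\mathbf{x})\right] + (\mathbf{x} - \mathbf{x}_a)^{T} \mathbf{S}_a^{-1} (\mathbf{x} - \mathbf{x}_a)$

    where $\mathbf{y}$ are the measured radiances, $F$ the forward model, $\mathbf{x}_a$ the a priori state, and $\mathbf{S}_{\epsilon}$, $\mathbf{S}_a$ the measurement and a priori covariance matrices; the sensitivity and error analyses discussed above follow from the curvature of this cost function.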

  7. The GRAPE aerosol retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Thomas, G. E.; Poulsen, C. A.; Sayer, A. M.; Marsh, S. H.; Dean, S. M.; Carboni, E.; Siddans, R.; Grainger, R. G.; Lawrence, B. N.

    2009-04-01

    The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations - this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set. The algorithm is described in detail and its performance examined. This includes a discussion of errors resulting from the formulation of the forward model, sensitivity of the retrieval to the measurements and a priori constraints, and errors resulting from assumptions made about the atmospheric/surface state.

  8. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980's and early 1990's by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  9. Anomaly detection in hyperspectral imagery: statistics vs. graph-based algorithms

    NASA Astrophysics Data System (ADS)

    Berkson, Emily E.; Messinger, David W.

    2016-05-01

    Anomaly detection (AD) algorithms are frequently applied to hyperspectral imagery, but different algorithms produce different outlier results depending on the image scene content and the assumed background model. This work provides the first comparison of anomaly score distributions between common statistics-based anomaly detection algorithms (RX and subspace-RX) and the graph-based Topological Anomaly Detector (TAD). Anomaly scores in statistical AD algorithms should theoretically approximate a chi-squared distribution; however, this is rarely the case with real hyperspectral imagery. The expected distribution of scores found with graph-based methods remains unclear. We also look for general trends in algorithm performance with varied scene content. Three separate scenes were extracted from the hyperspectral MegaScene image taken over downtown Rochester, NY with the VIS-NIR-SWIR ProSpecTIR instrument. In order of most to least cluttered, we study an urban, suburban, and rural scene. The three AD algorithms were applied to each scene, and the distributions of the most anomalous 5% of pixels were compared. We find that subspace-RX performs better than RX, because the data becomes more normal when the highest variance principal components are removed. We also see that compared to statistical detectors, anomalies detected by TAD are easier to separate from the background. Due to their different underlying assumptions, the statistical and graph-based algorithms highlighted different anomalies within the urban scene. These results will lead to a deeper understanding of these algorithms and their applicability across different types of imagery.
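
    The RX score referenced above is, in its standard form, the Mahalanobis distance of each pixel spectrum from the background statistics:

      $RX(\mathbf{x}) = (\mathbf{x} - \boldsymbol{\mu})^{T} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})$

    where $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are the background mean and covariance. For a Gaussian background this statistic follows a chi-squared distribution, which is why the score distributions of real, non-Gaussian hyperspectral scenes are compared against that expectation above.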

  10. Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.

    PubMed

    Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick

    2009-08-17

    In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods including other LMS and constant statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts, but has a slightly longer convergence time.
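
    A minimal sketch of a gated LMS-style NUC update is given below, assuming a per-pixel gain/offset model driven toward the local spatial mean and a simple frame-difference gate. The gating statistic, threshold, step size, and 3x3 desired-image filter are illustrative assumptions, not the published parameters.

      import numpy as np

      def gated_lms_nuc(frames, mu=1e-3, gate_threshold=2.0):
          # Per-pixel correction y = g * x + o, updated by LMS toward the local
          # spatial mean, with updates halted (gated) where the scene is static.
          g = np.ones_like(frames[0], dtype=np.float64)
          o = np.zeros_like(frames[0], dtype=np.float64)
          prev = frames[0].astype(np.float64)
          corrected = []
          for raw in frames[1:]:
              raw = raw.astype(np.float64)
              y = g * raw + o
              # Desired value: 3x3 local spatial mean of the corrected frame.
              padded = np.pad(y, 1, mode="edge")
              desired = sum(padded[i:i + y.shape[0], j:j + y.shape[1]]
                            for i in range(3) for j in range(3)) / 9.0
              err = y - desired
              gate = np.abs(raw - prev) > gate_threshold   # update only where the scene changes
              g -= mu * err * raw * gate
              o -= mu * err * gate
              prev = raw
              corrected.append(y)
          return corrected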

  11. A sustainable genetic algorithm for satellite resource allocation

    NASA Technical Reports Server (NTRS)

    Abbott, R. J.; Campbell, M. L.; Krenz, W. C.

    1995-01-01

    A hybrid genetic algorithm is used to schedule tasks for 8 satellites, which can be modelled as a robot whose task is to retrieve objects from a two dimensional field. The objective is to find a schedule that maximizes the value of objects retrieved. Typical of the real-world tasks to which this corresponds is the scheduling of ground contacts for a communications satellite. An important feature of our application is that the amount of time available for running the scheduler is not necessarily known in advance. This requires that the scheduler produce reasonably good results after a short period but that it also continue to improve its results if allowed to run for a longer period. We satisfy this requirement by developing what we call a sustainable genetic algorithm.

  12. Mean field analysis of algorithms for scale-free networks in molecular biology

    PubMed Central

    2017-01-01

    The sampling of scale-free networks in Molecular Biology is usually achieved by growing networks from a seed using recursive algorithms with elementary moves which include the addition and deletion of nodes and bonds. These algorithms include the Barabási-Albert algorithm. Later algorithms, such as the Duplication-Divergence algorithm, the Solé algorithm and the iSite algorithm, were inspired by biological processes underlying the evolution of protein networks, and the networks they produce differ essentially from networks grown by the Barabási-Albert algorithm. In this paper the mean field analysis of these algorithms is reconsidered, and extended to variant and modified implementations of the algorithms. The degree sequences of scale-free networks decay according to a power-law distribution, namely P(k) ∼ k^{-γ}, where γ is a scaling exponent. We derive mean field expressions for γ, and test these by numerical simulations. Generally, good agreement is obtained. We also found that some algorithms do not produce scale-free networks (for example some variant Barabási-Albert and Solé networks). PMID:29272285
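
    For comparison with the biologically inspired growth rules, the Barabási-Albert elementary move is plain preferential attachment: each new node connects to existing nodes with probability proportional to their current degree. A minimal sketch (the seed graph and the number of attachments m are placeholder choices):

      import random

      def barabasi_albert(n_nodes, m=2):
          # Grow a network by attaching each new node to m existing nodes,
          # chosen with probability proportional to their current degree.
          edges = [(0, 1)]        # small seed graph
          stubs = [0, 1]          # node list weighted by degree
          for new in range(2, n_nodes):
              targets = set()
              while len(targets) < min(m, new):
                  targets.add(random.choice(stubs))
              for t in targets:
                  edges.append((new, t))
                  stubs.extend([new, t])
          return edges

    In the mean field picture this growth rule yields a degree distribution P(k) ∼ k^{-γ} with γ close to 3, whereas the duplication-divergence style moves discussed above generally give different exponents or non-scale-free behaviour.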

  13. Mean field analysis of algorithms for scale-free networks in molecular biology.

    PubMed

    Konini, S; Janse van Rensburg, E J

    2017-01-01

    The sampling of scale-free networks in Molecular Biology is usually achieved by growing networks from a seed using recursive algorithms with elementary moves which include the addition and deletion of nodes and bonds. These algorithms include the Barabási-Albert algorithm. Later algorithms, such as the Duplication-Divergence algorithm, the Solé algorithm and the iSite algorithm, were inspired by biological processes underlying the evolution of protein networks, and the networks they produce differ essentially from networks grown by the Barabási-Albert algorithm. In this paper the mean field analysis of these algorithms is reconsidered, and extended to variant and modified implementations of the algorithms. The degree sequences of scale-free networks decay according to a power-law distribution, namely P(k) ∼ k^{-γ}, where γ is a scaling exponent. We derive mean field expressions for γ, and test these by numerical simulations. Generally, good agreement is obtained. We also found that some algorithms do not produce scale-free networks (for example some variant Barabási-Albert and Solé networks).

  14. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  15. A greedy algorithm for species selection in dimension reduction of combustion chemistry

    NASA Astrophysics Data System (ADS)

    Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.

    2010-09-01

    Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
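
    The greedy strategy described above reduces to repeatedly adding whichever candidate species most lowers the error of the current constrained set. A schematic sketch with a generic error estimator follows; the estimator itself, which requires the RCCE/PaSR machinery, is left abstract, and the function names are illustrative.

      def greedy_select(candidates, error_of, n_constraints):
          # Greedily grow the constrained-species set: at each step, add the
          # candidate that yields the smallest dimension-reduction error.
          selected = []
          remaining = list(candidates)
          while len(selected) < n_constraints and remaining:
              best = min(remaining, key=lambda s: error_of(selected + [s]))
              selected.append(best)
              remaining.remove(best)
          return selected

    The nested evaluations make the greedy pass far cheaper than the combinatorial search for the truly optimal set, which is what the abstract identifies as computationally infeasible.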

  16. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    PubMed

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development on clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alterative to the traditional K-means algorithm in single-particle cryo-EM analysis.

  17. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry

    NASA Astrophysics Data System (ADS)

    Bedggood, Phillip; Metha, Andrew

    2010-11-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces range and robustness comparable to those of the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  18. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    PubMed

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces range and robustness comparable to those of the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  19. Estimation of Contextual Effects through Nonlinear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Yang, Ji Seung; Cai, Li

    2014-01-01

    The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…

  20. An Improved Clustering Algorithm of Tunnel Monitoring Data for Cloud Computing

    PubMed Central

    Zhong, Luo; Tang, KunHao; Li, Lin; Yang, Guang; Ye, JingJing

    2014-01-01

    With the rapid development of urban construction, the number of urban tunnels is increasing and the data they produce become more and more complex. As a result, the traditional clustering algorithm cannot handle the mass of data produced by tunnels. To solve this problem, an improved parallel clustering algorithm based on k-means has been proposed. It is a clustering algorithm that uses MapReduce within cloud computing to process the data. It not only has the advantage of handling mass data but is also more efficient. Moreover, it is able to compute the average dissimilarity degree of each cluster in order to clean the abnormal data.
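
    The parallel step of a MapReduce k-means of this kind amounts to mapping each record to its nearest centroid and reducing per-cluster partial sums into new centroids. The single-machine sketch below shows only those two phases; the MapReduce framework plumbing, the dissimilarity-based cleaning step, and all names are assumptions for illustration.

      import numpy as np

      def kmeans_map(points, centroids):
          # Map phase: emit (nearest-centroid index, (point, 1)) pairs.
          for p in points:
              idx = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
              yield idx, (p, 1)

      def kmeans_reduce(mapped):
          # Reduce phase: average the points assigned to each centroid.
          sums, counts = {}, {}
          for idx, (p, c) in mapped:
              sums[idx] = sums.get(idx, 0) + p
              counts[idx] = counts.get(idx, 0) + c
          return {idx: sums[idx] / counts[idx] for idx in sums}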

  1. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The methodology proposed can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to that of the standard MART, with the benefit of reduced computational time.
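
    For reference, the multiplicative MART update that the BIMART and SMART variants accelerate is commonly written as (notation follows the usual tomo-PIV literature rather than this paper):

      $E_j \leftarrow E_j \left( \frac{I(\mathbf{x}_i)}{\sum_{j'} w_{i,j'}\, E_{j'}} \right)^{\mu\, w_{i,j}}$

    where $E_j$ is the intensity of voxel $j$, $I(\mathbf{x}_i)$ the intensity of pixel $i$, $w_{i,j}$ the pixel-voxel weighting coefficient and $\mu$ a relaxation parameter; each pixel constraint multiplicatively corrects the voxels along its line of sight.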

  2. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  3. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold on numerical algorithms. Such algorithms are used for example in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst case analysis considers that no significant bit remains. We are using PVS as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
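
    The basic bound behind such probabilistic error analyses is Markov's inequality for a nonnegative random variable $X$ and threshold $a > 0$:

      $P(X \ge a) \le \frac{\mathbb{E}[X]}{a}$

    Levy-type maximal inequalities extend bounds of this kind to the running maximum of accumulated terms, which is the property needed for systems that evolve for a long time; the exact forms used in the report are formalized in PVS and are not restated here.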

  4. Comparison of Point Cloud Registration Algorithms for Better Result Assessment - Towards AN Open-Source Solution

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2018-05-01

    Terrestrial and airborne laser scanning, photogrammetry and, more generally, 3D recording techniques are used in a wide range of applications. After recording several individual 3D datasets known in local systems, one of the first crucial processing steps is the registration of these data into a common reference frame. To perform such a 3D transformation, commercial and open source software as well as programs from the academic community are available. Due to some shortcomings in terms of computation transparency and quality assessment in these solutions, it was decided to develop an open source algorithm, which is presented in this paper. It is dedicated to the simultaneous registration of multiple point clouds as well as their georeferencing. The idea is to use this algorithm as a starting point for further implementations, involving the possibility of combining 3D data from different sources. Parallel to the presentation of the global registration methodology that has been employed, the aim of this paper is to confront the results achieved this way with the above-mentioned existing solutions. For this purpose, first results obtained with the proposed algorithm for the global registration of ten laser scanning point clouds are presented. An analysis of the quality criteria delivered by two selected software packages used in this study, together with a reflection on these criteria, is also performed to complete the comparison of the obtained results. The final aim of this paper is to validate the current efficiency of the proposed method through these comparisons.

  5. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    NASA Astrophysics Data System (ADS)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for the design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems of discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
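
    For orientation, a minimal continuous Big Bang-Big Crunch loop is sketched below in Python: each Big Crunch contracts the population to a fitness-weighted centre of mass, and each Big Bang scatters new candidates around it with a radius that shrinks with the iteration count. The discrete member-size handling and the specific reformulation proposed in the article are not reproduced here; the population size, shrinking rule and test function are assumptions of this sketch.

      import numpy as np

      def bb_bc_minimise(obj, lower, upper, pop=40, iters=100, alpha=1.0, seed=0):
          # Continuous BB-BC sketch for minimising obj(x) within box bounds.
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          X = rng.uniform(lower, upper, size=(pop, len(lower)))
          best_x, best_f = None, np.inf
          for k in range(1, iters + 1):
              f = np.array([obj(x) for x in X])
              if f.min() < best_f:
                  best_f, best_x = f.min(), X[f.argmin()].copy()
              # Big Crunch: fitness-weighted centre of mass (better candidates weigh more)
              w = 1.0 / (f - f.min() + 1e-12)
              centre = (w[:, None] * X).sum(axis=0) / w.sum()
              # Big Bang: scatter new candidates with a shrinking search radius
              spread = alpha * (upper - lower) / k
              X = np.clip(centre + rng.standard_normal((pop, len(lower))) * spread,
                          lower, upper)
          return best_x, best_f

      # toy usage: minimise the sphere function in 5 dimensions
      x_opt, f_opt = bb_bc_minimise(lambda x: float(np.sum(x ** 2)),
                                    lower=[-5] * 5, upper=[5] * 5)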

  6. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  7. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition-rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors on calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.

  8. ROBIN: a platform for evaluating automatic target recognition algorithms: II. Protocols used for evaluating algorithms and results obtained on the SAGEM DS database

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    Over the past five years, the computer vision community has explored many different avenues of research for Automatic Target Recognition. Noticeable advances have been made and we are now in the situation where large-scale evaluations of ATR technologies have to be carried out, to determine what the limitations of the recently proposed methods are and to determine the best directions for future work. ROBIN, which is a project funded by the French Ministry of Defence and by the French Ministry of Research, has the ambition of being a new reference for benchmarking ATR algorithms in operational contexts. This project, headed by major companies and research centers involved in Computer Vision R&D in the field of Defense (Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES), recently released a large dataset of several thousand hand-annotated infrared and RGB images of different targets in different situations. Setting up an evaluation campaign requires us to define, accurately and carefully, sets of data (both for training ATR algorithms and for their evaluation), tasks to be evaluated, and finally protocols and metrics for the evaluation. ROBIN offers interesting contributions to each one of these three points. This paper first describes, justifies and defines the set of functions used in the ROBIN competitions and relevant for evaluating ATR algorithms (Detection, Localization, Recognition and Identification). It also defines the metrics and the protocol used for evaluating these functions. In the second part of the paper, the results obtained by several state-of-the-art algorithms on the SAGEM DS database (a subpart of ROBIN) are presented and discussed.

  9. Solar Energetic Particle Forecasting Algorithms and Associated False Alarms

    NASA Astrophysics Data System (ADS)

    Swalwell, B.; Dalla, S.; Walsh, R. W.

    2017-11-01

    Solar energetic particle (SEP) events are known to occur following solar flares and coronal mass ejections (CMEs). However, some high-energy solar events do not result in SEPs being detected at Earth, and it is these types of event which may be termed "false alarms". We define two simple SEP forecasting algorithms based upon the occurrence of a magnetically well-connected CME with a speed in excess of 1500 km s^{-1} (a "fast" CME) or a well-connected X-class flare and analyse them with respect to historical datasets. We compare the parameters of those solar events which produced an enhancement of {>} 40 MeV protons at Earth (an "SEP event") and the parameters of false alarms. We find that an SEP forecasting algorithm based solely upon the occurrence of a well-connected fast CME produces fewer false alarms (28.8%) than an algorithm which is based solely upon a well-connected X-class flare (50.6%). Both algorithms fail to forecast a relatively high percentage of SEP events (53.2% and 50.6%, respectively). Our analysis of the historical datasets shows that false-alarm X-class flares were either not associated with any CME, or were associated with a CME slower than 500 km s^{-1}; false-alarm fast CMEs tended to be associated with flare classes lower than M3. A better approach to forecasting would be an algorithm which takes as its base the occurrence of both CMEs and flares. We define a new forecasting algorithm which uses a combination of CME and flare parameters, and we show that the false-alarm ratio is similar to that for the algorithm based upon fast CMEs (29.6%), but the percentage of SEP events not forecast is reduced to 32.4%. Lists of the solar events which gave rise to {>} 40 MeV protons and the false alarms have been derived and are made available to aid further study.
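
    The decision rule behind these forecasting algorithms can be expressed compactly; the Python sketch below combines the two simple criteria (a well-connected CME faster than 1500 km/s, or a well-connected X-class flare) with a plain OR. The data structure, field names and the OR combination are assumptions of this sketch, not the exact combined CME/flare algorithm defined in the paper.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class SolarEvent:
          well_connected: bool             # magnetically well connected to Earth
          cme_speed_kms: Optional[float]   # CME speed in km/s, None if no CME observed
          flare_class: Optional[str]       # GOES class, e.g. "X1.5"; None if no flare

      def flare_magnitude(flare_class: str) -> float:
          # Convert a GOES class string such as 'M3.2' into a comparable flux value.
          scale = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}
          return scale[flare_class[0].upper()] * float(flare_class[1:])

      def forecast_sep(event: SolarEvent) -> bool:
          # Warn when the event is well connected and shows either a fast CME
          # (> 1500 km/s) or an X-class flare; the plain OR is an assumption,
          # not the paper's exact combined algorithm.
          if not event.well_connected:
              return False
          fast_cme = event.cme_speed_kms is not None and event.cme_speed_kms > 1500.0
          x_flare = (event.flare_class is not None
                     and flare_magnitude(event.flare_class) >= 1e-4)
          return fast_cme or x_flare

      print(forecast_sep(SolarEvent(True, 1800.0, "M2.0")))   # True: well-connected fast CME
      print(forecast_sep(SolarEvent(True, 450.0, "X2.1")))    # True: well-connected X-class flare
      print(forecast_sep(SolarEvent(False, 2000.0, "X9.0")))  # False: not well connected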

  10. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    NASA Astrophysics Data System (ADS)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources have been invested in devising better-performing sorting algorithms. For this purpose, many existing sorting algorithms were examined in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting is considered a fundamental problem in the study of algorithms for several reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort introduced by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm, which is considered an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm was compared with the SMS algorithm and the results were promising.

  11. Why all randomised controlled trials produce biased results.

    PubMed

    Krauss, Alexander

    2018-06-01

    Randomised controlled trials (RCTs) are commonly viewed as the best research method to inform public health and social policy. Usually they are thought of as providing the most rigorous evidence of a treatment's effectiveness without strong assumptions, biases and limitations. This is the first study to examine that hypothesis by assessing the 10 most cited RCT studies worldwide. These 10 RCT studies with the highest number of citations in any journal (up to June 2016) were identified by searching Scopus (the largest database of peer-reviewed journals). This study shows that these world-leading RCTs that have influenced policy produce biased results, by illustrating that participants' background traits that affect outcomes are often poorly distributed between trial groups, that the trials often neglect alternative factors contributing to their main reported outcome and, among many other issues, that the trials are often only partially blinded or unblinded. The study also identifies a number of novel and important assumptions, biases and limitations not yet thoroughly discussed in existing studies that arise when designing, implementing and analysing trials. Researchers and policymakers need to become better aware of the broader set of assumptions, biases and limitations in trials. Journals also need to begin requiring researchers to outline them in their studies. Furthermore, we need to make better use of RCTs together with other research methods. Key messages: RCTs face a range of strong assumptions, biases and limitations that have not yet all been thoroughly discussed in the literature. This study assesses the 10 most cited RCTs worldwide and shows that trials inevitably produce bias. Trials involve complex processes - from randomising, blinding and controlling, to implementing treatments, monitoring participants, etc. - that require many decisions and steps at different levels that bring their own assumptions and degree of bias to results.

  12. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images - one containing gray-level information and the other with enhanced high-frequency components. Both of the images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
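
    The final blending step described above is essentially a weighted sum of the two intermediate reconstructions; a minimal NumPy sketch follows. The function name, the use of a single scalar weight and the toy inputs are assumptions of this sketch, not the authors' exact parameterization.

      import numpy as np

      def combine_bct(gray_recon, edge_recon, detail_weight=0.3):
          # Final image = smooth gray-level reconstruction plus a weighted
          # contribution of the edge-enhanced reconstruction; the single scalar
          # 'detail_weight' stands in for the paper's two control parameters.
          return gray_recon + detail_weight * edge_recon

      # toy usage with random arrays standing in for the two few-iteration IIR outputs
      rng = np.random.default_rng(0)
      gray_recon = rng.random((128, 128))
      edge_recon = rng.standard_normal((128, 128)) * 0.05
      final_image = combine_bct(gray_recon, edge_recon, detail_weight=0.25)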

  13. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    The paper addresses the fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient code. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it coincides with an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
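
    The distance-based alternative maps naturally onto a pixel-per-processor layout, since every pixel evaluates the same test independently; the vectorized NumPy sketch below mimics that data-parallel formulation. The half-width value and the bounding-box test are assumptions of this sketch.

      import numpy as np

      def draw_line_distance(shape, x0, y0, x1, y1, half_width=0.5):
          # Mark every pixel whose perpendicular distance to the line through
          # (x0, y0)-(x1, y1) is below half_width and that lies within the
          # segment's bounding box.  Fully vectorised, so each pixel's test is
          # independent, as on a SIMD pixel-per-processor machine.
          ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
          dx, dy = x1 - x0, y1 - y0
          length = np.hypot(dx, dy)
          dist = np.abs(dy * (xs - x0) - dx * (ys - y0)) / length
          inside = ((xs >= min(x0, x1)) & (xs <= max(x0, x1)) &
                    (ys >= min(y0, y1)) & (ys <= max(y0, y1)))
          return (dist <= half_width) & inside

      # toy usage: a boolean image with one line segment drawn
      img = draw_line_distance((32, 32), 2, 3, 28, 20)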

  14. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm

    PubMed Central

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
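
    A simplified stand-in for such a size-constrained K-means is sketched below: a penalty proportional to the current cluster size is added to the assignment cost, discouraging any single cluster from growing too large. The penalty form and its weight are assumptions of this sketch, not the adaptive constraint term derived in the paper.

      import numpy as np

      def balanced_kmeans(X, k, lam=1.0, n_iter=50, seed=0):
          # K-means with a size penalty added to the assignment cost.
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)].astype(float)
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          labels = d2.argmin(axis=1)
          for _ in range(n_iter):
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = X[labels == j].mean(axis=0)
              counts = np.bincount(labels, minlength=k)
              d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
              # crowded clusters become more expensive to join
              labels = (d2 + lam * counts[None, :]).argmin(axis=1)
          return labels, centers

      # toy usage: two Gaussian blobs of unequal size
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (60, 2))])
      labels, centers = balanced_kmeans(X, k=2, lam=2.0)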

  15. Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.

  16. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    PubMed

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  17. Generation of Referring Expressions: Assessing the Incremental Algorithm

    ERIC Educational Resources Information Center

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  18. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input

  19. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of these methods is cryptography. Cryptography secures a file by transforming it into a hidden code that conceals the original content; therefore, anyone who does not possess the key cannot decrypt the hidden code to read the original file. Many methods are used in cryptography, and one of them is the hybrid cryptosystem. A hybrid cryptosystem uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The results show that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table represented as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
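
    For reference, the standard 32-round TEA block cipher on which the symmetric half of this hybrid scheme rests can be written in a few lines of Python; the key and plaintext below are arbitrary test values, and the LUC key-encryption half is not shown.

      DELTA = 0x9E3779B9
      MASK = 0xFFFFFFFF

      def tea_encrypt_block(v0, v1, key):
          # Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
          # given as four 32-bit words, using the standard 32-round TEA.
          k0, k1, k2, k3 = key
          s = 0
          for _ in range(32):
              s = (s + DELTA) & MASK
              v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
              v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
          return v0, v1

      def tea_decrypt_block(v0, v1, key):
          # Reverse the rounds in the opposite order.
          k0, k1, k2, k3 = key
          s = (DELTA * 32) & MASK
          for _ in range(32):
              v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
              v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
              s = (s - DELTA) & MASK
          return v0, v1

      # arbitrary test key and 64-bit plaintext block; round-trip check
      key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
      cipher = tea_encrypt_block(0x12345678, 0x9ABCDEF0, key)
      assert tea_decrypt_block(*cipher, key) == (0x12345678, 0x9ABCDEF0)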

  20. An effective hair detection algorithm for dermoscopic melanoma images of skin lesions

    NASA Astrophysics Data System (ADS)

    Chakraborti, Damayanti; Kaur, Ravneet; Umbaugh, Scott; LeAnder, Robert

    2016-09-01

    Dermoscopic images are obtained using the method of skin surface microscopy. Pigmented skin lesions are evaluated in terms of texture features such as color and structure. Artifacts, such as hairs, bubbles, black frames, ruler marks, etc., create obstacles that prevent accurate detection of skin lesions by both clinicians and computer-aided diagnosis. In this article, we propose a new algorithm for the automated detection of hairs, using an adaptive Canny edge-detection method, followed by morphological filtering and an arithmetic addition operation. The algorithm was applied to 50 dermoscopic melanoma images. In order to ascertain this method's relative detection accuracy, it was compared to the Razmjooy hair-detection method [1], using segmentation error (SE), true detection rate (TDR) and false positioning rate (FPR). The new method produced 6.57% SE, 96.28% TDR and 3.47% FPR, compared to 15.751% SE, 86.29% TDR and 11.74% FPR produced by the Razmjooy method [1]. Because of the 7.27-9.99% improvement in those parameters, we conclude that the new algorithm produces much better results for detecting thick, thin, dark and light hairs. The new method proposed here also shows an appreciable difference in the rate of detecting bubbles.
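
    A hedged sketch of this pipeline using OpenCV is shown below: adaptive Canny edges, morphological closing and dilation, then an arithmetic addition onto the grayscale image. The median-based threshold rule, the kernel size and the file path are assumptions of this sketch rather than the exact adaptive scheme used in the paper.

      import cv2
      import numpy as np

      def detect_hair(bgr_image, sigma=0.33):
          # Adaptive Canny thresholds from the median intensity (a common
          # heuristic, not necessarily the paper's adaptive scheme).
          gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
          v = float(np.median(gray))
          lower = int(max(0, (1.0 - sigma) * v))
          upper = int(min(255, (1.0 + sigma) * v))
          edges = cv2.Canny(gray, lower, upper)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
          closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)   # join broken hair edges
          mask = cv2.dilate(closed, kernel, iterations=1)             # cover the full hair width
          overlay = cv2.add(gray, mask)                               # arithmetic addition highlights hairs
          return mask, overlay

      # usage (path is a placeholder)
      # img = cv2.imread("dermoscopy_image.png")
      # hair_mask, highlighted = detect_hair(img)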

  1. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 run now at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  2. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 run now at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  3. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P > 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
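
    The five reconstruction variants differ mainly in the per-pixel rule used to combine the shifted component projections (average for conventional back projection; minimum, maximum or median for the nonlinear variants). The NumPy sketch below illustrates that shift-and-combine step for a single reconstruction plane; the integer column shifts and toy data are assumptions, and the iteratively deconvoluted variant is not shown.

      import numpy as np

      def tomosynthesis_slice(projections, shifts, mode="mean"):
          # projections: array (n_views, H, W); shifts: integer column shift that
          # aligns each view for the plane of interest.  'mode' selects the
          # per-pixel combination rule compared in the study.
          aligned = np.stack([np.roll(p, s, axis=1) for p, s in zip(projections, shifts)])
          combine = {"mean": np.mean, "min": np.min, "max": np.max, "median": np.median}[mode]
          return combine(aligned, axis=0)

      # toy usage: 5 random views with shifts appropriate for one plane
      views = np.random.rand(5, 64, 64)
      plane_shifts = [-2, -1, 0, 1, 2]
      slice_min = tomosynthesis_slice(views, plane_shifts, mode="min")
      slice_med = tomosynthesis_slice(views, plane_shifts, mode="median")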

  4. Algorithms and Results of Eye Tissues Differentiation Based on RF Ultrasound

    PubMed Central

    Jurkonis, R.; Janušauskas, A.; Marozas, V.; Jegelevičius, D.; Daukantas, S.; Patašius, M.; Paunksnis, A.; Lukoševičius, A.

    2012-01-01

    Algorithms and software were developed for the analysis of B-scan ultrasonic signals acquired from a commercial diagnostic ultrasound system. The algorithms process raw ultrasonic signals in the backscattered spectrum domain, which is obtained using two time-frequency methods: short-time Fourier and Hilbert-Huang transformations. The signals from selected regions of eye tissues are characterized by the following parameters: B-scan envelope amplitude, approximated spectral slope, approximated spectral intercept, mean instantaneous frequency, mean instantaneous bandwidth, and parameters of the Nakagami distribution characterizing the Hilbert-Huang transformation output. The backscattered ultrasound signal parameters characterizing intraocular and orbit tissues were processed by a decision tree data mining algorithm. The pilot trial proved that the applied methods are able to correctly classify signals from corpus vitreum blood, extraocular muscle, and orbit tissues. In the 26 cases of ocular tissue classification, one error occurred when classifying tissues into the corpus vitreum blood, extraocular muscle, and orbit tissue classes. In this pilot classification, the spectral intercept and the Nakagami parameter of the instantaneous-frequency distribution of the first intrinsic mode function were found to be specific to corpus vitreum blood, orbit, and extraocular muscle tissues. We conclude that ultrasound data should be further collected in a clinical database to establish the background for a decision support system for noninvasive ocular tissue differentiation. PMID:22654643

  5. Image processing meta-algorithm development via genetic manipulation of existing algorithm graphs

    NASA Astrophysics Data System (ADS)

    Schalkoff, Robert J.; Shaaban, Khaled M.

    1999-07-01

    Automatic algorithm generation for image processing applications is not a new idea; however, previous work is either restricted to morphological operators or impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e. algorithms which lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work on dynamic algorithms. The paper first presents the concept of dynamic algorithms which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism to achieve meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are presented. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.

  6. Algorithms for selecting informative marker panels for population assignment.

    PubMed

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
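
    A rough illustration of the greedy strategy is sketched below in Python: at each step the marker whose inclusion most improves cross-validated assignment accuracy is added to the panel. The scikit-learn Gaussian naive-Bayes assigner, the cross-validation proxy and the toy data are assumptions of this sketch, not the likelihood-based assignment test used in the paper.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      def greedy_marker_panel(X, y, panel_size):
          # Greedy forward selection: repeatedly add the marker whose inclusion
          # most increases cross-validated assignment accuracy.
          chosen, remaining = [], list(range(X.shape[1]))
          while len(chosen) < panel_size and remaining:
              scores = []
              for m in remaining:
                  cols = chosen + [m]
                  acc = cross_val_score(GaussianNB(), X[:, cols], y, cv=3).mean()
                  scores.append((acc, m))
              best_acc, best_m = max(scores)
              chosen.append(best_m)
              remaining.remove(best_m)
          return chosen

      # toy usage: 120 individuals, 20 markers, 3 source populations
      rng = np.random.default_rng(1)
      y = rng.integers(0, 3, size=120)
      X = rng.normal(loc=y[:, None] * 0.5, scale=1.0, size=(120, 20))
      print(greedy_marker_panel(X, y, panel_size=5))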

  7. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis of accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
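
    The classical MPD loop is short enough to sketch directly; the version below also stops once the best correlation falls below a relative threshold, which mirrors the atom-pruning idea of MPD++ only in spirit. The threshold definition, the random dictionary and the toy signal are assumptions of this sketch.

      import numpy as np

      def matching_pursuit(signal, dictionary, max_iter=50, corr_threshold=0.05):
          # dictionary: (n_atoms, n_samples), rows assumed unit-norm.
          # At each iteration pick the atom with the largest correlation to the
          # residual, record its coefficient and subtract its contribution.
          residual = signal.astype(float).copy()
          atoms, coeffs = [], []
          for _ in range(max_iter):
              corr = dictionary @ residual
              best = int(np.argmax(np.abs(corr)))
              if np.abs(corr[best]) < corr_threshold * np.linalg.norm(signal):
                  break
              atoms.append(best)
              coeffs.append(corr[best])
              residual -= corr[best] * dictionary[best]
          return atoms, coeffs, residual

      # toy usage: signal built from two atoms of a random unit-norm dictionary
      rng = np.random.default_rng(0)
      D = rng.standard_normal((200, 128))
      D /= np.linalg.norm(D, axis=1, keepdims=True)
      x = 3.0 * D[17] - 1.5 * D[42]
      selected, coefficients, res = matching_pursuit(x, D)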

  8. Genetic Algorithm and Tabu Search for Vehicle Routing Problems with Stochastic Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail, Zuhaimy, E-mail: zuhaimyi@yahoo.com, E-mail: irhamahn@yahoo.com; Irhamah, E-mail: zuhaimyi@yahoo.com, E-mail: irhamahn@yahoo.com

    2010-11-11

    This paper presents a problem of designing solid waste collection routes, involving the scheduling of vehicles where each vehicle begins at the depot, visits customers and ends at the depot. It is modeled as a Vehicle Routing Problem with Stochastic Demands (VRPSD). A data set from a real-world problem (a case) is used in this research. We developed Genetic Algorithm (GA) and Tabu Search (TS) procedures, and these have produced the best possible results. The problem data are inspired by a real case of VRPSD in waste collection. Results from the experiment show the advantages of the proposed algorithm, namely its robustness and better solution quality.

  9. SU-E-J-97: Quality Assurance of Deformable Image Registration Algorithms: How Realistic Should Phantoms Be?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Stathakis, S; Kirby, N

    Purpose: Deformable image registration (DIR) has widespread uses in radiotherapy for applications such as dose accumulation studies, multi-modality image fusion, and organ segmentation. The quality assurance (QA) of such algorithms, however, remains largely unimplemented. This work aims to determine how detailed a physical phantom needs to be to accurately perform QA of a DIR algorithm. Methods: Virtual prostate and head-and-neck phantoms, made from patient images, were used for this study. Both sets consist of an undeformed and deformed image pair. The images were processed to create additional image pairs with one through five homogeneous tissue levels using Otsu’s method. Realistic noise was then added to each image. The DIR algorithms from MIM and Velocity (Deformable Multipass) were applied to the original phantom images and the processed ones. The resulting deformations were then compared to the known warping. A higher number of tissue levels creates more contrast in an image and enables DIR algorithms to produce more accurate results. For this reason, error (distance between predicted and known deformation) is utilized as a metric to evaluate how many levels are required for a phantom to be a realistic patient proxy. Results: For the prostate image pairs, the mean error decreased from 1–2 tissue levels and remained constant for 3+ levels. The mean error reduction was 39% and 26% for Velocity and MIM respectively. For head and neck, mean error fell similarly through 2 levels and flattened with total reduction of 16% and 49% for Velocity and MIM. For Velocity, 3+ levels produced comparable accuracy as the actual patient images, whereas MIM showed further accuracy improvement. Conclusion: The number of tissue levels needed to produce an accurate patient proxy depends on the algorithm. For Velocity, three levels were enough, whereas five was still insufficient for MIM.

  10. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli B.; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  11. Next Generation Aura-OMI SO2 Retrieval Algorithm: Introduction and Implementation Status

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, Nickolay A.; Bhartia, Pawan K.

    2014-01-01

    We introduce our next generation algorithm to retrieve SO2 using radiance measurements from the Aura Ozone Monitoring Instrument (OMI). We employ a principal component analysis technique to analyze OMI radiance spectra in the 310.5-340 nm range acquired over regions with no significant SO2. The resulting principal components (PCs) capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering, and ozone absorption) and measurement artifacts, enabling us to account for these various interferences in SO2 retrievals. By fitting these PCs along with SO2 Jacobians calculated with a radiative transfer model to OMI-measured radiance spectra, we directly estimate the SO2 vertical column density in one step. Compared with the previous generation operational OMSO2 PBL (Planetary Boundary Layer) SO2 product, our new algorithm greatly reduces unphysical biases and decreases the noise by a factor of two, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research. We have operationally implemented this new algorithm on the OMI SIPS for producing the new generation of standard OMI SO2 products.
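
    The one-step fit described above reduces to a linear least-squares problem once the PCs and the SO2 Jacobian are in hand; the NumPy sketch below illustrates it on synthetic data. Whether the fit is performed in radiance or log-radiance space, and all numbers used here, are assumptions of this sketch.

      import numpy as np

      def fit_so2_column(measured_spectrum, pcs, so2_jacobian):
          # Fit the measured spectrum as a linear combination of the leading
          # principal components plus the SO2 Jacobian; the coefficient of the
          # Jacobian is the retrieved SO2 column (in the Jacobian's units).
          A = np.column_stack([pcs.T, so2_jacobian])
          coeffs, *_ = np.linalg.lstsq(A, measured_spectrum, rcond=None)
          return coeffs[-1]

      # toy usage: 3 PCs over 30 wavelengths plus a synthetic Jacobian
      rng = np.random.default_rng(2)
      pcs = rng.standard_normal((3, 30))
      jac = np.exp(-np.linspace(0, 3, 30))
      truth = 0.8
      spectrum = 0.5 * pcs[0] - 0.2 * pcs[1] + truth * jac + 1e-3 * rng.standard_normal(30)
      print(fit_so2_column(spectrum, pcs, jac))   # ~0.8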

  12. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals for video surveillance. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium to fast moving objects. However, it showed degraded performance for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing and scarce illumination changes.
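
    As a concrete example of the histogram-thresholding family compared above, the Python sketch below thresholds an absolute difference image with a self-contained Otsu implementation; the 256-bin histogram and the toy frames are assumptions of this sketch (the study itself used MATLAB implementations).

      import numpy as np

      def otsu_threshold(values):
          # Standard Otsu: pick the 256-bin threshold maximising between-class variance.
          hist, bin_edges = np.histogram(values, bins=256)
          p = hist.astype(float) / hist.sum()
          omega = np.cumsum(p)                    # class-0 probability
          mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
          mu_t = mu[-1]
          with np.errstate(divide="ignore", invalid="ignore"):
              sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
          k = int(np.nanargmax(sigma_b2))
          return bin_edges[k + 1]

      def change_mask(frame_t0, frame_t1):
          # Histogram-thresholding change detection on the absolute difference image.
          diff = np.abs(frame_t1.astype(float) - frame_t0.astype(float))
          return diff > otsu_threshold(diff.ravel())

      # toy usage: a bright square appears in the second frame
      a = np.zeros((64, 64)); b = a.copy(); b[20:30, 20:30] = 200.0
      mask = change_mask(a, b)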

  13. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.

  14. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    DOT National Transportation Integrated Search

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on times series and pattern recognition techniques. Attention was given to the effects o...

  15. Potential for false positive HIV test results with the serial rapid HIV testing algorithm

    PubMed Central

    2012-01-01

    Background Rapid HIV tests provide same-day results and are widely used in HIV testing programs in areas with limited personnel and laboratory infrastructure. The Uganda Ministry of Health currently recommends the serial rapid testing algorithm with Determine, STAT-PAK, and Uni-Gold for diagnosis of HIV infection. Using this algorithm, individuals who test positive on Determine, negative to STAT-PAK and positive to Uni-Gold are reported as HIV positive. We conducted further testing on this subgroup of samples using qualitative DNA PCR to assess the potential for false positive tests in this situation. Results Of the 3388 individuals who were tested, 984 were HIV positive on two consecutive tests, and 29 were considered positive by a tiebreaker (positive on Determine, negative on STAT-PAK, and positive on Uni-Gold). However, when the 29 samples were further tested using qualitative DNA PCR, 14 (48.2%) were HIV negative. Conclusion Although this study was not primarily designed to assess the validity of rapid HIV tests and thus only a subset of the samples were retested, the findings show a potential for false positive HIV results in the subset of individuals who test positive when a tiebreaker test is used in serial testing. These findings highlight a need for confirmatory testing for this category of individuals. PMID:22429706

  16. Evaluation of an Area-Based matching algorithm with advanced shape models

    NASA Astrophysics Data System (ADS)

    Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.

    2014-04-01

    Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of a series of new algorithms for image matching (e.g. Semi-Global Matching) that yield superior results (especially because they usually produce smooth and continuous surfaces) with lower processing times, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing the efforts mainly on the improvement of the correlation phase and on process automation. Important changes have been made to the correlation algorithm, still maintaining its high performance in terms of precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling using different shape functions, as originally proposed by other authors in different applications.

  17. Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation.

    PubMed

    Blessy, S A Praylin Selva; Sulochana, C Helen

    2015-01-01

    Segmentation of brain tumors from Magnetic Resonance Imaging (MRI) is very complicated due to the structural complexity of the human brain and the presence of intensity inhomogeneities. The aim is to propose a method that effectively segments brain tumors from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this segmentation task. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumors from MR images.

  18. A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times

    NASA Astrophysics Data System (ADS)

    Li, Xin; Fung, Richard Y. K.

    2018-02-01

    This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers must lie within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in the previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.

  19. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.

  20. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    PubMed

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step of our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and by performing parameter optimisation on the oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.

  1. [Knowledge of BLS and AED resuscitation algorithm amongst medical students--preliminary results].

    PubMed

    Chojnacki, Piotr; Ilieva, Rada; Kołodziej, Anna; Królikowska, Agata; Lipka, Jarosław; Ruta, Jaromir

    2011-01-01

    Early recognition of cardiac arrest (CA) and immediate commencement of resuscitation may increase the survival rate among CA victims. We therefore conducted a survey among medical students to assess their knowledge of BLS and AED. The audit was performed among students, most of whom had completed at least one first-aid course, as well as students who had not taken a first-aid course at all. The ERC-recommended 2005 questionnaire was used for the survey. One hundred and sixty-five students completed the survey. Most of them recognized the usefulness of basic resuscitation algorithms and the use of AEDs. 88% of students recognized the importance of first-aid courses, and 91.6% would undertake them again. Despite obvious enthusiasm and self-declared adequate knowledge, 45.7% of the audited students were not familiar with the guidelines and answered more than 6 of the 12 questions incorrectly. The vast majority of the first-year medical students were not familiar with the algorithms. We conclude that general knowledge of resuscitation algorithms among medical students is inadequate, and regular refresher courses are essential.

  2. Retrieval of aerosol optical properties using MERIS observations: Algorithm and some first results.

    PubMed

    Mei, Linlu; Rozanov, Vladimir; Vountas, Marco; Burrows, John P; Levy, Robert C; Lotz, Wolfhardt

    2017-08-01

    The MEdium Resolution Imaging Spectrometer (MERIS) instrument on board ESA Envisat made measurements from 2002 to 2012. Although MERIS was limited in spectral coverage, accurate Aerosol Optical Thickness (AOT) can be retrieved from MERIS data by using appropriate additional information. We introduce a new AOT retrieval algorithm for MERIS over land surfaces, referred to as eXtensible Bremen AErosol Retrieval (XBAER). XBAER is similar to the "dark-target" (DT) retrieval algorithm used for the Moderate-resolution Imaging Spectroradiometer (MODIS), in that it uses a lookup table (LUT) to match the satellite-observed reflectance and derive the AOT. Instead of a global parameterization of surface spectral reflectance, XBAER uses a set of spectral coefficients to prescribe surface properties. In this manner, XBAER is not limited to dark surfaces (vegetation) and retrieves AOT over bright surfaces (desert, semiarid, and urban areas). Preliminary validation of the MERIS-derived AOT against ground-based Aerosol Robotic Network (AERONET) measurements yields good agreement, with a regression equation of y = (0.92 ± 0.07)x + (0.05 ± 0.01) and a Pearson correlation coefficient of R = 0.78. Global monthly means of AOT have been compared from XBAER, MODIS and other satellite-derived datasets.
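
    The lookup-table matching step that XBAER shares with the dark-target approach can be reduced to a one-dimensional illustration: given a table of modelled top-of-atmosphere reflectances for a grid of AOT values, the retrieval inverts the table for the observed reflectance. The table values and the monotone reflectance model below are made up for illustration only.

    ```python
    import numpy as np

    # Hypothetical lookup table: AOT values and the top-of-atmosphere reflectance a
    # radiative-transfer model would predict for each (single wavelength, fixed geometry).
    lut_aot = np.linspace(0.0, 2.0, 41)
    lut_reflectance = 0.05 + 0.12 * (1.0 - np.exp(-lut_aot))   # made-up monotone relation

    def retrieve_aot(observed_reflectance):
        """Invert the LUT by linear interpolation (the relation must be monotone)."""
        return float(np.interp(observed_reflectance, lut_reflectance, lut_aot))

    print(retrieve_aot(0.10))   # AOT that best explains the observed reflectance
    ```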

  3. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
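
    The reconstruction relies on an Iterative Extended Kalman Filter; the essential predict/update cycle can be shown with a plain linear Kalman filter on a one-dimensional constant-velocity model. The dynamics, noise covariances and synthetic radar measurements below are illustrative assumptions; the flight reconstruction uses full vehicle dynamics and iterates the measurement update.

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
    H = np.array([[1.0, 0.0]])              # radar measures position only
    Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
    R = np.array([[0.5]])                   # measurement noise covariance (assumed)

    x = np.array([[0.0], [0.0]])            # state estimate [position; velocity]
    P = np.eye(2)

    rng = np.random.default_rng(1)
    truth_pos = 0.5 * 3.0 * (np.arange(100) * dt) ** 2        # truth: constant acceleration
    measurements = truth_pos + rng.normal(0.0, 0.7, size=100)

    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    print("final position/velocity estimate:", x.ravel())
    ```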

  4. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    NASA Astrophysics Data System (ADS)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    Optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997, when noise is not taken into account. Particularly, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of the permutation parity faster than a classical algorithm without the necessity of entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follows a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  5. Correlation between standard plate count and somatic cell count milk quality results for Wisconsin dairy producers.

    PubMed

    Borneman, Darand L; Ingham, Steve

    2014-05-01

    The objective of this study was to determine if a correlation exists between standard plate count (SPC) and somatic cell count (SCC) monthly reported results for Wisconsin dairy producers. Such a correlation may indicate that Wisconsin producers effectively controlling sanitation and milk temperature (reflected in low SPC) also have implemented good herd health management practices (reflected in low SCC). The SPC and SCC results for all grade A and B dairy producers who submitted results to the Wisconsin Department of Agriculture, Trade, and Consumer Protection, in each month of 2012 were analyzed. Grade A producer SPC results were less dispersed than grade B producer SPC results. Regression analysis showed a highly significant correlation between SPC and SCC, but the R(2) value was very small (0.02-0.03), suggesting that many other factors, besides SCC, influence SPC. Average SCC (across 12 mo) for grade A and B producers decreased with an increase in the number of monthly SPC results (out of 12) that were ≤ 25,000 cfu/mL. A chi-squared test of independence showed that the proportion of monthly SCC results >250,000 cells/mL varied significantly depending on whether the corresponding SPC result was ≤ 25,000 or >25,000 cfu/mL. This significant difference occurred in all months of 2012 for grade A and B producers. The results suggest that a generally consistent level of skill exists across dairy production practices affecting SPC and SCC. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Instances selection algorithm by ensemble margin

    NASA Astrophysics Data System (ADS)

    Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine

    2018-05-01

    The main limitation of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. One solution for producing fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce the storage and computational cost without performance degradation. In this paper, a new instance selection approach called the Ensemble Margin Instance Selection (EMIS) algorithm is proposed. This approach is based on the ensemble margin. To evaluate our approach, we have conducted several experiments on different real-world classification problems from the UCI Machine Learning repository. Pixel-based image segmentation is a field where the storage requirement and computational cost of the applied model become higher. To address these limitations, we conduct a study based on the application of EMIS and other instance selection techniques to the segmentation and automatic recognition of white blood cells (WBC; nucleus and cytoplasm) in cytological images.

  7. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    PubMed

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with

  8. Potential for false positive HIV test results with the serial rapid HIV testing algorithm.

    PubMed

    Baveewo, Steven; Kamya, Moses R; Mayanja-Kizza, Harriet; Fatch, Robin; Bangsberg, David R; Coates, Thomas; Hahn, Judith A; Wanyenze, Rhoda K

    2012-03-19

    Rapid HIV tests provide same-day results and are widely used in HIV testing programs in areas with limited personnel and laboratory infrastructure. The Uganda Ministry of Health currently recommends the serial rapid testing algorithm with Determine, STAT-PAK, and Uni-Gold for diagnosis of HIV infection. Using this algorithm, individuals who test positive on Determine, negative to STAT-PAK and positive to Uni-Gold are reported as HIV positive. We conducted further testing on this subgroup of samples using qualitative DNA PCR to assess the potential for false positive tests in this situation. Of the 3388 individuals who were tested, 984 were HIV positive on two consecutive tests, and 29 were considered positive by a tiebreaker (positive on Determine, negative on STAT-PAK, and positive on Uni-Gold). However, when the 29 samples were further tested using qualitative DNA PCR, 14 (48.2%) were HIV negative. Although this study was not primarily designed to assess the validity of rapid HIV tests and thus only a subset of the samples were retested, the findings show a potential for false positive HIV results in the subset of individuals who test positive when a tiebreaker test is used in serial testing. These findings highlight a need for confirmatory testing for this category of individuals.
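
    The serial testing logic described above is compact enough to state as a decision function; the sketch below follows the abstract's description of the Ugandan algorithm, with the tiebreaker-positive branch flagged for the confirmatory testing the authors recommend. It is an illustration of the published logic, not clinical software.

    ```python
    def serial_rapid_hiv_result(determine_pos, statpak_pos, unigold_pos=None):
        """Serial rapid testing: STAT-PAK only after a positive Determine,
        Uni-Gold only as a tiebreaker when the first two tests disagree."""
        if not determine_pos:
            return "HIV negative"
        if statpak_pos:
            return "HIV positive (two concordant rapid tests)"
        if unigold_pos is None:
            return "Uni-Gold tiebreaker required"
        if unigold_pos:
            # The subgroup with ~48% false positives in the study: confirm with DNA PCR.
            return "HIV positive by tiebreaker - confirmatory testing recommended"
        return "HIV negative"

    print(serial_rapid_hiv_result(True, False, True))
    ```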

  9. Quality control algorithms for rainfall measurements

    NASA Astrophysics Data System (ADS)

    Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs

    2005-09-01

    One of the basic requirements for a scientific use of rain data from raingauges, ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.) if the achievable data quality could be improved. This depends on the data quality delivered by the measuring devices and on the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review and a literature pool have been produced. The diverse algorithms have been evaluated against the VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool has been established, where the software is collected. A large part of the work presented here was implemented within the scope of the EU project VOLTAIRE (Validation of multisensor precipitation fields and numerical modeling in Mediterranean test sites).

  10. A multiobjective hybrid genetic algorithm for the capacitated multipoint network design problem.

    PubMed

    Lo, C C; Chang, W H

    2000-01-01

    The capacitated multipoint network design problem (CMNDP) is NP-complete. In this paper, a hybrid genetic algorithm for CMNDP is proposed. The multiobjective hybrid genetic algorithm (MOHGA) differs from other genetic algorithms (GAs) mainly in its selection procedure. The concept of subpopulation is used in MOHGA. Four subpopulations are generated according to the elitism reservation strategy, the shifting Prufer vector, the stochastic universal sampling, and the complete random method, respectively. Mixing these four subpopulations produces the next generation population. The MOHGA can effectively search the feasible solution space due to population diversity. The MOHGA has been applied to CMNDP. By examining computational and analytical results, we notice that the MOHGA can find most nondominated solutions and is much more effective and efficient than other multiobjective GAs.
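
    The distinguishing feature of the MOHGA is its selection procedure, which assembles the next generation from four subpopulations. The sketch below mixes four such subpopulations for a toy single-objective problem; the shifting Prufer-vector operator is only stubbed as a small random perturbation of the elite, and the fitness function is illustrative, not the CMNDP objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(x):
        return -np.sum(x ** 2, axis=1)             # toy objective (maximize)

    def stochastic_universal_sampling(pop, fit, n):
        f = fit - fit.min() + 1e-9                 # shift fitness to positive values
        cum = np.cumsum(f / f.sum())
        start = rng.uniform(0.0, 1.0 / n)
        pointers = start + np.arange(n) / n        # evenly spaced selection pointers
        return pop[np.searchsorted(cum, pointers)]

    pop = rng.uniform(-1.0, 1.0, size=(40, 5))
    fit = fitness(pop)

    elite   = pop[np.argsort(fit)[-10:]]                          # elitism reservation
    shifted = elite + rng.normal(0.0, 0.05, elite.shape)          # stand-in for shifting Prufer vector
    sus     = stochastic_universal_sampling(pop, fit, 10)         # stochastic universal sampling
    random_ = rng.uniform(-1.0, 1.0, size=(10, 5))                # complete random method

    next_generation = np.vstack([elite, shifted, sus, random_])   # mix the four subpopulations
    print(next_generation.shape)
    ```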

  11. Machine-Learning Algorithms to Code Public Health Spending Accounts

    PubMed Central

    Leider, Jonathon P.; Resnick, Beth A.; Alfonso, Y. Natalia; Bishai, David

    2017-01-01

    Objectives: Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. Methods: We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Results: Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Conclusions: Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence
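
    The consensus-ensembling rule, which accepts a label only when at least six of the nine classifiers agree and otherwise leaves the record unclassified, can be written in a few lines. The label set and the per-record predictions below are placeholders, not the study's trained models.

    ```python
    import numpy as np

    LABELS = ["public_health", "maybe_public_health", "not_public_health"]

    def consensus_code(predictions, min_votes=6):
        """predictions: list of label strings, one per classifier in the ensemble."""
        votes = np.array([predictions.count(lab) for lab in LABELS])
        if votes.max() >= min_votes:
            return LABELS[int(votes.argmax())]
        return None   # left unclassified: reduces coverage, protects precision

    example = ["public_health"] * 7 + ["not_public_health"] * 2
    print(consensus_code(example))      # 'public_health'
    print(consensus_code(LABELS * 3))   # None: no label reaches 6 votes
    ```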

  12. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, and also with those of a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.

  13. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    PubMed Central

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.

    2013-01-01

    Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data

  14. A Coulomb collision algorithm for weighted particle simulations

    NASA Technical Reports Server (NTRS)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.

  15. NRGC: a novel referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2016-11-15

    Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost effective but also can be done in laboratory environment. The state-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting edge computing technology makes it possible to build genomic sequences from the billions of reads within a minimal cost and time. As a consequence, we see an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We are in need of suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable to compress biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress the genomic sequences. We have done rigorous experiments to evaluate NRGC by taking a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most of the cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes. They can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip CONTACT: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Adaptive phase k-means algorithm for waveform classification

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.
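
    A rough way to convey the idea of a phase-tolerant distance inside k-means is to take the minimum squared distance over circular shifts of the centroid, as in the sketch below; this is a crude stand-in for the paper's adaptive phase distance, and the synthetic waveform families are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def phase_tolerant_dist(trace, centroid):
        """Minimum squared distance over circular shifts of the centroid,
        a crude stand-in for the adaptive phase distance of the paper."""
        shifts = [np.roll(centroid, s) for s in range(len(centroid))]
        return min(np.sum((trace - s) ** 2) for s in shifts)

    def phase_kmeans(traces, k, n_iter=20):
        centroids = traces[rng.choice(len(traces), k, replace=False)].copy()
        for _ in range(n_iter):
            labels = np.array([np.argmin([phase_tolerant_dist(t, c) for c in centroids])
                               for t in traces])
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = traces[labels == j].mean(axis=0)
        return labels, centroids

    # Two waveform families differing mainly by phase shifts should separate into two clusters.
    t = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
    family_a = np.array([np.sin(t + rng.normal(0, 0.3)) for _ in range(10)])
    family_b = np.array([np.sign(np.sin(t + rng.normal(0, 0.3))) for _ in range(10)])
    labels, _ = phase_kmeans(np.vstack([family_a, family_b]), k=2)
    print(labels)
    ```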

  17. A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng

    2017-12-01

    A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (Unmanned Aerial Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (conjunction nodes) of the initial network are relocated, since some of them lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm, based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and performs better than the state-of-the-art methods on these datasets.
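
    Weighted A* differs from plain A* only in inflating the heuristic by a factor w ≥ 1, which biases the search toward the goal at the cost of strict optimality. The sketch below runs it on a toy cost grid standing in for the edge diagram (high cost where a seam-line would cross a building edge); the cost model is an assumption, not the paper's.

    ```python
    import heapq

    def weighted_astar(cost, start, goal, w=2.0):
        """Weighted A* on a 4-connected grid. cost[r][c] is the per-cell traversal
        cost (standing in for the edge-diagram term); w >= 1 inflates the heuristic."""
        rows, cols = len(cost), len(cost[0])
        h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])   # Manhattan heuristic
        g_score, parent = {start: 0.0}, {start: None}
        open_heap = [(w * h(start), start)]
        closed = set()
        while open_heap:
            _, node = heapq.heappop(open_heap)
            if node in closed:
                continue
            closed.add(node)
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            r, c = node
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nb[0] < rows and 0 <= nb[1] < cols and nb not in closed:
                    ng = g_score[node] + cost[nb[0]][nb[1]]
                    if ng < g_score.get(nb, float("inf")):
                        g_score[nb], parent[nb] = ng, node
                        heapq.heappush(open_heap, (ng + w * h(nb), nb))
        return None

    grid = [[1, 1, 1, 1],
            [1, 9, 9, 1],   # high cost ~ crossing a building edge
            [1, 1, 1, 1]]
    print(weighted_astar(grid, (0, 0), (2, 3)))
    ```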

  18. An index of refraction algorithm for seawater over temperature, pressure, salinity, density, and wavelength

    NASA Astrophysics Data System (ADS)

    Millard, R. C.; Seaver, G.

    1990-12-01

    A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m -3 which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m -3 accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
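
    The salinometer is obtained by numerically inverting the index algorithm with the Newton-Raphson method. The sketch below inverts a hypothetical, much simplified n(S) relation at fixed temperature, pressure and wavelength; it is not the 27-term fit, only an illustration of the inversion step.

    ```python
    def refractive_index(salinity):
        """Hypothetical simplified n(S) at fixed T, p and wavelength. NOT the 27-term
        algorithm of the paper, just a monotone stand-in for illustration."""
        return 1.3330 + 1.95e-4 * salinity

    def salinity_from_index(n_measured, s0=20.0, tol=1e-10, max_iter=50):
        """Newton-Raphson inversion of n(S), the same idea used to build the salinometer."""
        s = s0
        for _ in range(max_iter):
            f = refractive_index(s) - n_measured
            # numerical derivative dn/dS
            dfds = (refractive_index(s + 1e-6) - refractive_index(s - 1e-6)) / 2e-6
            step = f / dfds
            s -= step
            if abs(step) < tol:
                break
        return s

    print(salinity_from_index(refractive_index(35.0)))   # should recover ~35 psu
    ```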

  19. DNA-based watermarks using the DNA-Crypt algorithm

    PubMed Central

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
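
    The least-significant-base idea can be illustrated with a toy encoder that hides one bit per codon in the wobble (third) position of four-fold degenerate codons, so the encoded amino-acid sequence is unchanged. This is only a simplified illustration of the principle; it is not the DNA-Crypt algorithm and omits encryption and the mutation-correction codes.

    ```python
    # Four-fold degenerate codon families: any third ("wobble") base codes the same residue,
    # so one bit can be hidden per codon without changing the protein.
    FOURFOLD = {"GC": "A", "GG": "G", "GT": "V", "CC": "P", "AC": "T", "CG": "R", "CT": "L", "TC": "S"}
    BIT_TO_BASE = {"0": "C", "1": "G"}                                # arbitrary convention
    BASE_TO_BIT = {"C": "0", "T": "0", "G": "1", "A": "1"}

    def embed(codons, bits):
        out, bit_iter = [], iter(bits)
        for codon in codons:
            prefix = codon[:2]
            if prefix in FOURFOLD:
                b = next(bit_iter, None)
                codon = prefix + (BIT_TO_BASE[b] if b is not None else codon[2])
            out.append(codon)
        return out

    def extract(codons, n_bits):
        bits = [BASE_TO_BIT[c[2]] for c in codons if c[:2] in FOURFOLD]
        return "".join(bits[:n_bits])

    gene = ["ATG", "GCT", "GGA", "CTT", "ACC", "TCA", "TAA"]   # toy open reading frame
    marked = embed(gene, "1011")
    print(marked, extract(marked, 4))   # watermark recovered, protein unchanged
    ```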

  20. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.

  1. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been studied intensively and extensively in the field of number theory. As a result, a lot of deterministic algorithms such as Euler's algorithm, Kraitchik's algorithm, and variants of Pollard's algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
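
    Pollard's rho, the deterministic baseline used for the comparison, fits in a few lines; the textbook variant below uses Floyd cycle detection and a toy modulus.

    ```python
    import math
    import random

    def pollards_rho(n):
        """Standard Pollard's rho with Floyd cycle detection (textbook version,
        shown only to illustrate the baseline the paper compares against)."""
        if n % 2 == 0:
            return 2
        while True:
            x = y = random.randrange(2, n)
            c = random.randrange(1, n)
            d = 1
            while d == 1:
                x = (x * x + c) % n            # tortoise: one step
                y = (y * y + c) % n            # hare: two steps
                y = (y * y + c) % n
                d = math.gcd(abs(x - y), n)
            if d != n:                          # d == n means the cycle failed; retry
                return d

    n = 10403                                   # 101 * 103, a toy "RSA modulus"
    p = pollards_rho(n)
    print(p, n // p)
    ```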

  2. Stride search: A general algorithm for storm detection in high-resolution climate data

    DOE PAGES

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...

    2016-04-13

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
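
    The defining property of Stride Search is that search-region centres are spaced by a fixed physical distance rather than a fixed number of grid points, so polar regions are covered with the same region size as the tropics. The sketch below applies that idea to a made-up "vorticity" field with one storm-like bump; the stride, radius and threshold values are illustrative.

    ```python
    import numpy as np

    EARTH_RADIUS_KM = 6371.0

    def stride_centers(stride_km, lat_max=80.0):
        """Search-region centres spaced roughly stride_km apart in physical distance."""
        centers = []
        dlat = np.degrees(stride_km / EARTH_RADIUS_KM)
        for lat in np.arange(-lat_max, lat_max + dlat, dlat):
            circ = 2 * np.pi * EARTH_RADIUS_KM * np.cos(np.radians(lat))
            n_lon = max(1, int(circ // stride_km))
            centers += [(lat, lon) for lon in np.linspace(0.0, 360.0, n_lon, endpoint=False)]
        return centers

    def detect(field_lat, field_lon, field, centers, radius_km, threshold):
        """Flag every search region whose maximum field value exceeds the threshold."""
        hits = []
        for clat, clon in centers:
            # haversine great-circle distance from the centre to every grid point
            dlon = np.radians(field_lon[None, :] - clon)
            a = (np.sin(np.radians(field_lat[:, None] - clat) / 2) ** 2
                 + np.cos(np.radians(clat)) * np.cos(np.radians(field_lat[:, None])) * np.sin(dlon / 2) ** 2)
            dist = 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(np.minimum(a, 1.0)))
            mask = dist <= radius_km
            if mask.any() and field[mask].max() > threshold:
                hits.append((clat, clon))
        return hits

    # Made-up "vorticity" field with one storm-like bump near (15N, 130E).
    lats, lons = np.linspace(-90, 90, 91), np.linspace(0, 358, 180)
    LON, LAT = np.meshgrid(lons, lats)
    field = np.exp(-((LAT - 15) ** 2 + (LON - 130) ** 2) / 50.0)

    print(detect(lats, lons, field, stride_centers(1000.0), radius_km=500.0, threshold=0.5))
    ```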

  3. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also produce the greatest cost. Therefore, it is important to keep transportation costs as low as possible. Reducing the transportation cost can be done in several ways; one of them is optimizing the routing of the vehicles, which refers to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, each vehicle has its own capacity, and the total demand of the customers it serves should not exceed that capacity. CVRP belongs to the class of NP-hard problems, which makes it complex to solve: exact algorithms become highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches for solving CVRP: the Genetic Algorithm and Particle Swarm Optimization. The paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still need to be improved. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in terms of total distance travelled.

  4. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  5. An Algorithm for Interactive Modeling of Space-Transportation Engine Simulations: A Constraint Satisfaction Approach

    NASA Technical Reports Server (NTRS)

    Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara

    2001-01-01

    In this research we have developed an algorithm for the purpose of constraint processing by utilizing relational algebraic operators. Van Beek and others have investigated in the past this type of constraint processing from within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use the existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm here enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the Constraint-satisfaction problem (CSP) area could be also plugged easily within this algorithm for further enhancements of efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the problem IMBSES that we are currently handling.

  6. Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems

    NASA Astrophysics Data System (ADS)

    Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine

    2016-12-01

    A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt meshes of moving boundaries problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of initial meshes. In most situations, MMAs distribute mesh deformation while preserving a good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has been proven efficient to adapt meshes for various realistic aerodynamic motions: a symmetric wing that has suffered large tip bending and twisting and the high-lift components of a swept wing that has moved to different flight stages. Finally, the fluid flow problem has been solved on meshes that have moved and they have produced results close to experimental ones. However, for situations where moving boundaries are too close to each other, more improvements need to be made or other approaches should be taken, such as an overset grid method.
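
    The first step of the proposed combination, the Inverse Distance Weighting displacement field, amounts to moving each interior node by a distance-weighted average of the prescribed boundary displacements. The 2D sketch below shows only that step; the GETMe smoothing and the untangling pass are omitted, and the exponent is an assumed value.

    ```python
    import numpy as np

    def idw_displace(interior, boundary, boundary_disp, power=3.0):
        """Move interior mesh nodes by the inverse-distance-weighted average of the
        prescribed boundary displacements."""
        d = np.linalg.norm(interior[:, None, :] - boundary[None, :, :], axis=2)   # (Ni, Nb)
        w = 1.0 / np.maximum(d, 1e-12) ** power
        w /= w.sum(axis=1, keepdims=True)
        return interior + w @ boundary_disp

    boundary = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    boundary_disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.2, 0.1], [0.0, 0.0]])   # one corner moves
    interior = np.array([[0.5, 0.5], [0.25, 0.75]])
    print(idw_displace(interior, boundary, boundary_disp))
    ```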

  7. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and various techniques within each approach) use different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.

  8. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.

  9. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the performance of the algorithm. The system allows us to measure intra-ocular distances.
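
    Correcting interface positions along each A-scan requires refracting the probe ray at every detected interface, which is an application of the vector form of Snell's law, as sketched below; the refractive indices and geometry are illustrative, not the model eye's.

    ```python
    import numpy as np

    def refract(incident, normal, n1, n2):
        """Vector form of Snell's law: returns the refracted unit direction,
        or None for total internal reflection."""
        i = incident / np.linalg.norm(incident)
        n = normal / np.linalg.norm(normal)
        cos_i = -np.dot(n, i)
        if cos_i < 0:                       # flip normal so it opposes the incident ray
            n, cos_i = -n, -cos_i
        eta = n1 / n2
        k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
        if k < 0:
            return None                     # total internal reflection
        return eta * i + (eta * cos_i - np.sqrt(k)) * n

    # Ray hitting a cornea-like interface (air n=1.0 into tissue n~1.376, illustrative values).
    ray = np.array([0.2, 0.0, 1.0])
    surface_normal = np.array([0.0, 0.0, -1.0])
    print(refract(ray, surface_normal, 1.0, 1.376))
    ```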

  10. Creation and Validation of an Automated Algorithm to Determine Postoperative Ventilator Requirements After Cardiac Surgery.

    PubMed

    Gabel, Eilon; Hofer, Ira S; Satou, Nancy; Grogan, Tristan; Shemin, Richard; Mahajan, Aman; Cannesson, Maxime

    2017-05-01

    In medical practice today, clinical data registries have become a powerful tool for measuring and driving quality improvement, especially among multicenter projects. Registries face the known problem of trying to create dependable and clear metrics from electronic medical records data, which are typically scattered and often based on unreliable data sources. The Society for Thoracic Surgery (STS) is one such example, and it supports manually collected data by trained clinical staff in an effort to obtain the highest-fidelity data possible. As a possible alternative, our team designed an algorithm to test the feasibility of producing computer-derived data for the case of postoperative mechanical ventilation hours. In this article, we study and compare the accuracy of algorithm-derived mechanical ventilation data with manual data extraction. We created a novel algorithm that is able to calculate mechanical ventilation duration for any postoperative patient using raw data from our EPIC electronic medical record. Utilizing nursing documentation of airway devices, documentation of lines, drains, and airways, and respiratory therapist ventilator settings, the algorithm produced results that were then validated against the STS registry. This enabled us to compare our algorithm results with data collected by human chart review. Any discrepancies were then resolved with manual calculation by a research team member. The STS registry contained a total of 439 University of California Los Angeles cardiac cases from April 1, 2013, to March 31, 2014. After excluding 201 patients for not remaining intubated, tracheostomy use, or for having 2 surgeries on the same day, 238 cases met inclusion criteria. Comparing the postoperative ventilation durations between the 2 data sources resulted in 158 (66%) ventilation durations agreeing within 1 hour, indicating a probable correct value for both sources. Among the discrepant cases, the algorithm yielded results that were exclusively

  11. How Effective Is Algorithm-Guided Treatment for Depressed Inpatients? Results from the Randomized Controlled Multicenter German Algorithm Project 3 Trial.

    PubMed

    Adli, Mazda; Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael

    2017-09-01

    Treatment algorithms are considered as key to improve outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Inpatients, aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES), the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031) and ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rates (66.2%). A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission with depressed inpatients than treatment as usual or computerized medication choice guidance. © The Author 2017. Published by Oxford University Press on behalf of CINP.

  12. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  13. A grid layout algorithm for automatic drawing of biochemical networks.

    PubMed

    Li, Weijiang; Kurata, Hiroyuki

    2005-05-01

    Visualization is indispensable in the research of complex biochemical networks. Available graph layout algorithms are not adequate for satisfactorily drawing such networks. New methods are required to visualize automatically the topological architectures and facilitate the understanding of the functions of the networks. We propose a novel layout algorithm to draw complex biochemical networks. A network is modeled as a system of interacting nodes on squared grids. A discrete cost function between each node pair is designed based on the topological relation and the geometric positions of the two nodes. The layouts are produced by minimizing the total cost. We design a fast algorithm to minimize the discrete cost function, by which candidate layouts can be produced efficiently. A simulated annealing procedure is used to choose better candidates. Our algorithm demonstrates its ability to exhibit cluster structures clearly in relatively compact layout areas without any prior knowledge. We developed Windows software to implement the algorithm for CADLIVE. All materials can be freely downloaded from http://kurata21.bio.kyutech.ac.jp/grid/grid_layout.htm; http://www.cadlive.jp/
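
    The essential loop (place nodes on grid cells, evaluate a discrete cost, and accept or reject swap moves under a cooling schedule) can be sketched as below. The cost here is just the total Manhattan distance between connected nodes, a simplification of the paper's pairwise cost function, and the fast candidate-generation step is replaced by plain random swaps.

    ```python
    import math
    import random

    random.seed(0)

    def layout_cost(pos, edges):
        """Simplified cost: total Manhattan distance between connected nodes."""
        return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]) for a, b in edges)

    def anneal_layout(nodes, edges, grid=5, steps=20000, t0=4.0):
        cells = [(x, y) for x in range(grid) for y in range(grid)]
        occupant = {c: None for c in cells}                  # grid cell -> node or empty
        for node, cell in zip(nodes, random.sample(cells, len(nodes))):
            occupant[cell] = node
        pos = {n: c for c, n in occupant.items() if n is not None}
        cost = layout_cost(pos, edges)
        for step in range(steps):
            temp = t0 * (1.0 - step / steps) + 1e-9          # linear cooling schedule
            c1, c2 = random.sample(cells, 2)                 # candidate move: swap two cells
            occupant[c1], occupant[c2] = occupant[c2], occupant[c1]
            for c in (c1, c2):
                if occupant[c] is not None:
                    pos[occupant[c]] = c
            new_cost = layout_cost(pos, edges)
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
                cost = new_cost                              # accept (downhill, or uphill by chance)
            else:                                            # reject: undo the swap
                occupant[c1], occupant[c2] = occupant[c2], occupant[c1]
                for c in (c1, c2):
                    if occupant[c] is not None:
                        pos[occupant[c]] = c
        return pos, cost

    nodes = list("ABCDEF")
    edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("F", "A")]
    print(anneal_layout(nodes, edges))
    ```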

  14. Simple Estimation of Incident HIV Infection Rates in Notification Cohorts Based on Window Periods of Algorithms for Evaluation of Line-Immunoassay Result Patterns

    PubMed Central

    Schüpbach, Jörg; Gebhardt, Martin D.; Scherrer, Alexandra U.; Bisset, Leslie R.; Niederhauser, Christoph; Regenass, Stephan; Yerly, Sabine; Aubert, Vincent; Suter, Franziska; Pfister, Stefan; Martinetti, Gladys; Andreutti, Corinne; Klimkait, Thomas; Brandenberger, Marcel; Günthard, Huldrych F.

    2013-01-01

    Background Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their windows, we now determined their window periods. Methods We classified Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection ≤12 months according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e. the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined utilizing the relationship 'Prevalence = Incidence x Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR also derived from Inno-Lia results, but utilizing the relationship 'incident = true incident + false incident', and also to the IIR derived from the BED incidence assay. Results Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R² = 0.962; P<0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 obtained for the performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based IIR and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. Conclusions IIR estimations by window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes
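
    The window-based estimate rests on rearranging 'Prevalence = Incidence x Duration': the proportion of notifications an algorithm classifies as recent, divided by its window expressed in years, gives the incident infection rate. The sketch below applies that arithmetic to illustrative numbers, not the study's data.

    ```python
    def window_based_iir(n_classified_recent, n_notifications, window_days):
        """Rearrange 'Prevalence = Incidence x Duration': the proportion of notifications
        classified as recent, divided by the algorithm's window in years, estimates the
        incident infection rate."""
        prevalence_recent = n_classified_recent / n_notifications
        return prevalence_recent / (window_days / 365.25)

    # Illustrative numbers only (not taken from the study): 120 of 748 notifications
    # classified as incident by an algorithm whose window is 90 days.
    print(round(window_based_iir(120, 748, 90.0), 3))
    ```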

  15. SU-F-J-76: Evaluation of the Performance of Different Deformable Image Registration Algorithms in Helical, Axial and Cone-Beam CT Images of a Mobile Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaskowiak, J; Ahmad, S; Ali, I

    Purpose: To investigate quantitatively the performance of different deformable-image-registration (DIR) algorithms with helical (HCT), axial (ACT) and cone-beam CT (CBCT) by evaluating the variations in the CT-numbers and lengths of targets moving with controlled motion-patterns. Methods: Four DIR-algorithms including demons, fast-demons, Horn-Schunck and Lucas-Kanade from the DIRART software are used to register CT-images of a mobile phantom. The mobile phantom is scanned with different imaging techniques that include helical, axial and cone-beam CT. The phantom includes three targets with different lengths that are made from water-equivalent material and inserted in low-density foam which is moved with adjustable motion-amplitudes and frequencies. Results: Most of the DIR-algorithms are able to produce the lengths of the stationary-targets; however, they do not produce the CT-number values in CBCT. The image-artifacts induced by motion are more regular in CBCT imaging, where the mobile-target elongation increases linearly with motion-amplitude. In ACT and HCT, the motion-artifacts are irregular, where some mobile-targets are elongated or shrunk depending on the motion-phase during imaging. The DIR-algorithms are successful in deforming the images of the mobile-targets to the images of the stationary-targets, producing the CT-number values and length of the target for motion-amplitudes < 20 mm. Similarly in ACT, all DIR-algorithms produced the actual CT-number and length of the stationary-targets for motion-amplitudes < 15 mm. As stronger motion-artifacts are induced in HCT and ACT, DIR-algorithms fail to produce CT-values and shape of the stationary-targets, and the fast-demons algorithm has the worst performance. Conclusion: Most of the DIR-algorithms produce the CT-number values and lengths of the stationary-targets in HCT and ACT images that have motion-artifacts induced by small motion-amplitudes. As motion-amplitudes increase, the DIR-algorithms fail to deform mobile

  16. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (the border length minimization problem) and reducing the complexity of masks (the mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design, under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces provably optimal solutions for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.

  17. Algorithms and sensitivity analyses for Stratospheric Aerosol and Gas Experiment II water vapor retrieval

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Chiou, E. W.; Larsen, J. C.; Thomason, L. W.; Rind, D.; Buglia, J. J.; Oltmans, S.; Mccormick, M. P.; Mcmaster, L. M.

    1993-01-01

    The operational inversion algorithm used for the retrieval of the water-vapor vertical profiles from the Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation data is presented. Unlike the algorithm used for the retrieval of aerosol, O3, and NO2, the water-vapor retrieval algorithm accounts for the nonlinear relationship between the concentration versus the broad-band absorption characteristics of water vapor. Problems related to the accuracy of the computational scheme, the accuracy of the removal of other interfering species, and the expected uncertainty of the retrieved profile are examined. Results are presented on the error analysis of the SAGE II water vapor retrieval, indicating that the SAGE II instrument produced good quality water vapor data.

  18. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  19. A discrete artificial bee colony algorithm for detecting transcription factor binding sites in DNA sequences.

    PubMed

    Karaboga, D; Aslan, S

    2016-04-27

    The great majority of biological sequences share significant similarity with other sequences as a result of evolutionary processes, and identifying these sequence similarities is one of the most challenging problems in bioinformatics. In this paper, we present a discrete artificial bee colony (ABC) algorithm, which is inspired by the intelligent foraging behavior of real honey bees, for the detection of highly conserved residue patterns or motifs within sequences. Experimental studies on three different data sets showed that the proposed discrete model, by adhering to the fundamental scheme of the ABC algorithm, produced competitive or better results than other metaheuristic motif discovery techniques.

  20. A novel harmony search-K means hybrid algorithm for clustering gene expression data

    PubMed Central

    Nazeer, KA Abdul; Sebastian, MP; Kumar, SD Madhu

    2013-01-01

    Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. The DNA microarray technology makes it possible to simultaneously analyze a large number of genes across different samples. Clustering of microarray data can reveal the hidden gene expression patterns from large quantities of expression data that in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications. But the original k-means algorithm has several drawbacks. It is computationally expensive and generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find near-global optimal solutions by searching the entire solution space. Low clustering accuracy of the existing algorithms limits their use in many crucial applications of life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering the gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms. PMID:23390351

  1. A novel harmony search-K means hybrid algorithm for clustering gene expression data.

    PubMed

    Nazeer, Ka Abdul; Sebastian, Mp; Kumar, Sd Madhu

    2013-01-01

    Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. The DNA microarray technology makes it possible to simultaneously analyze a large number of genes across different samples. Clustering of microarray data can reveal the hidden gene expression patterns from large quantities of expression data that in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications. But the original k-means algorithm has several drawbacks. It is computationally expensive and generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find near-global optimal solutions by searching the entire solution space. Low clustering accuracy of the existing algorithms limits their use in many crucial applications of life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering the gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms.

  2. Comparison of Snow Mass Estimates from a Prototype Passive Microwave Snow Algorithm, a Revised Algorithm and a Snow Depth Climatology

    NASA Technical Reports Server (NTRS)

    Foster, J. L.; Chang, A. T. C.; Hall, D. K.

    1997-01-01

    While it is recognized that no single snow algorithm is capable of producing accurate global estimates of snow depth, for research purposes it is useful to test an algorithm's performance in different climatic areas in order to see how it responds to a variety of snow conditions. This study is one of the first to develop separate passive microwave snow algorithms for North America and Eurasia by including parameters that consider the effects of variations in forest cover and crystal size on microwave brightness temperature. A new algorithm (GSFC 1996) is compared to a prototype algorithm (Chang et al., 1987) and to a snow depth climatology (SDC), which for this study is considered to be a standard reference or baseline. It is shown that the GSFC 1996 algorithm compares much more favorably to the SDC than does the Chang et al. (1987) algorithm. For example, in North America in February there is a 15% difference between the GSFC 1996 algorithm and the SDC, but with the Chang et al. (1987) algorithm the difference is greater than 50%. In Eurasia, also in February, there is only a 1.3% difference between the GSFC 1996 algorithm and the SDC, whereas with the Chang et al. (1987) algorithm the difference is about 20%. As expected, differences tend to be less when the snow cover extent is greater, particularly for Eurasia. The GSFC 1996 algorithm performs better in North America in each month than does the Chang et al. (1987) algorithm. This is also the case in Eurasia, except in April and May, when the Chang et al. (1987) algorithm is in closer accord with the SDC than is the GSFC 1996 algorithm.

  3. Mathematical algorithm for the automatic recognition of intestinal parasites.

    PubMed

    Alva, Alicia; Cangalaya, Carla; Quiliano, Miguel; Krebs, Casey; Gilman, Robert H; Sheen, Patricia; Zimic, Mirko

    2017-01-01

    Parasitic infections are generally diagnosed by professionals trained to recognize the morphological characteristics of the eggs in microscopic images of fecal smears. However, this laboratory diagnosis requires medical specialists which are lacking in many of the areas where these infections are most prevalent. In response to this public health issue, we developed software based on pattern recognition analysis of microscopic digital images of fecal smears, capable of automatically recognizing and diagnosing common human intestinal parasites. To this end, we selected 229, 124, 217, and 229 objects from microscopic images of fecal smears positive for Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica, respectively. Representative photographs were selected by a parasitologist. We then implemented our algorithm in the open source program SCILAB. The algorithm processes the image by first converting it to gray-scale, then applies a fourteen-step filtering process, and produces a skeletonized and tri-colored image. The features extracted fall into two general categories: geometric characteristics and brightness descriptions. Individual characteristics were quantified and evaluated with a logistic regression to model their ability to correctly identify each parasite separately. Subsequently, all algorithms were evaluated for false positive cross reactivity with the other parasites studied, excepting Taenia sp., which shares very few morphological characteristics with the others. The principal result showed that our algorithm reached sensitivities between 99.10% and 100% and specificities between 98.13% and 98.38% to detect each parasite separately. We did not find any cross-positivity in the algorithms for the three parasites evaluated. In conclusion, the results demonstrated the capacity of our computer algorithm to automatically recognize and diagnose Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica with a high

  4. Mathematical algorithm for the automatic recognition of intestinal parasites

    PubMed Central

    Alva, Alicia; Cangalaya, Carla; Quiliano, Miguel; Krebs, Casey; Gilman, Robert H.; Sheen, Patricia; Zimic, Mirko

    2017-01-01

    Parasitic infections are generally diagnosed by professionals trained to recognize the morphological characteristics of the eggs in microscopic images of fecal smears. However, this laboratory diagnosis requires medical specialists which are lacking in many of the areas where these infections are most prevalent. In response to this public health issue, we developed software based on pattern recognition analysis of microscopic digital images of fecal smears, capable of automatically recognizing and diagnosing common human intestinal parasites. To this end, we selected 229, 124, 217, and 229 objects from microscopic images of fecal smears positive for Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica, respectively. Representative photographs were selected by a parasitologist. We then implemented our algorithm in the open source program SCILAB. The algorithm processes the image by first converting it to gray-scale, then applies a fourteen-step filtering process, and produces a skeletonized and tri-colored image. The features extracted fall into two general categories: geometric characteristics and brightness descriptions. Individual characteristics were quantified and evaluated with a logistic regression to model their ability to correctly identify each parasite separately. Subsequently, all algorithms were evaluated for false positive cross reactivity with the other parasites studied, excepting Taenia sp., which shares very few morphological characteristics with the others. The principal result showed that our algorithm reached sensitivities between 99.10% and 100% and specificities between 98.13% and 98.38% to detect each parasite separately. We did not find any cross-positivity in the algorithms for the three parasites evaluated. In conclusion, the results demonstrated the capacity of our computer algorithm to automatically recognize and diagnose Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica with a high

  5. An algorithm for solving the system-level problem in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A multilevel optimization approach which is applicable to nonhierarchic coupled systems is presented. The approach includes a general treatment of design (or behavior) constraints and coupling constraints at the discipline level through the use of norms. Three different types of norms are examined: the max norm, the Kreisselmeier-Steinhauser (KS) norm, and the l_p norm. The max norm is recommended. The approach is demonstrated on a class of hub frame structures which simulate multidisciplinary systems. The max norm is shown to produce system-level constraint functions which are non-smooth. A cutting-plane algorithm is presented which adequately deals with the resulting corners in the constraint functions. The algorithm is tested on hub frames with an increasing number of members (which simulate disciplines), and the results are summarized.

  6. Speedup of minimum discontinuity phase unwrapping algorithm with a reference phase distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yihang; Han, Yu; Li, Fengjiao; Zhang, Qican

    2018-06-01

    In three-dimensional (3D) shape measurement based on phase analysis, the phase analysis process usually produces a wrapped phase map ranging from -π to π with 2π discontinuities, and thus a phase unwrapping algorithm is necessary to recover the continuous, natural phase map from which the 3D height distribution can be restored. The minimum discontinuity phase unwrapping algorithm can be used to solve many different kinds of phase unwrapping problems, but its main drawback is that it requires a large amount of computation and has low efficiency in searching for the improving loop within the phase's discontinuity area. To overcome this drawback, an improvement that speeds up the minimum discontinuity phase unwrapping algorithm by using the phase distribution on a reference plane is proposed. In this improved algorithm, before the minimum discontinuity phase unwrapping algorithm is carried out, an integer number K is calculated from the ratio of the wrapped phase to the natural phase on a reference plane. The jump counts of the unwrapped phase can then be reduced by adding 2Kπ, so the efficiency of the minimum discontinuity phase unwrapping algorithm is significantly improved. Both simulated and experimental results verify the feasibility of the proposed improved algorithm, and both clearly show that the algorithm works very well and has high efficiency.
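
    The reference-plane idea can be illustrated with a short sketch: an integer fringe order K is estimated by comparing the wrapped phase with the continuous reference-plane phase, and 2Kπ is added before any remaining discontinuities are handed to the minimum discontinuity search. This is a minimal illustration of that pre-alignment step only; the array names and the toy object are assumptions, not the authors' code.

```python
import numpy as np

def reference_assisted_prealign(wrapped, ref_phase):
    """Pre-align a wrapped phase map using a reference-plane phase.

    wrapped   : 2D array, wrapped phase in (-pi, pi]
    ref_phase : 2D array, continuous (natural) phase measured on a
                reference plane with the same fringe geometry

    Returns a coarsely unwrapped phase; any residual discontinuities
    (far fewer than in `wrapped`) can then be handled by a minimum
    discontinuity unwrapper.
    """
    # Integer fringe order estimated from the reference plane
    K = np.round((ref_phase - wrapped) / (2.0 * np.pi))
    return wrapped + 2.0 * np.pi * K

# Toy usage: a tilted reference plane and a small object on top of it
x = np.linspace(0, 40 * np.pi, 512)
ref = np.tile(x, (512, 1))                      # continuous reference phase
obj = ref + 2.0 * np.exp(-((np.arange(512) - 256) ** 2) / 2000.0)[None, :]
wrapped = np.angle(np.exp(1j * obj))            # wrap into (-pi, pi]
coarse = reference_assisted_prealign(wrapped, ref)
print(np.abs(coarse - obj).max())               # near zero where |obj - ref| < pi
```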

  7. Iterative Assessment of Statistically-Oriented and Standard Algorithms for Determining Muscle Onset with Intramuscular Electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-12-01

    The onset of muscle activity, as measured by electromyography (EMG), is a commonly applied metric in biomechanics. Intramuscular EMG is often used to examine deep musculature and there are currently no studies examining the effectiveness of algorithms for intramuscular EMG onset. The present study examines standard surface EMG onset algorithms (linear envelope, Teager-Kaiser Energy Operator, and sample entropy) and novel algorithms (time series mean-variance analysis, sequential/batch processing with parametric and nonparametric methods, and Bayesian changepoint analysis). Thirteen male and five female subjects had intramuscular EMG collected during isolated biceps brachii and vastus lateralis contractions, resulting in 103 trials. EMG onset was visually determined twice by 3 blinded reviewers. Since the reliability of visual onset was high (ICC(1,1): 0.92), the mean of the 6 visual assessments was contrasted with the algorithmic approaches. Poorly performing algorithms were stepwise eliminated via (1) root mean square error analysis, (2) algorithm failure to identify onset/premature onset, (3) linear regression analysis, and (4) Bland-Altman plots. The top performing algorithms were all based on Bayesian changepoint analysis of rectified EMG and were statistically indistinguishable from visual analysis. Bayesian changepoint analysis has the potential to produce more reliable, accurate, and objective intramuscular EMG onset results than standard methodologies.

  8. Contextual classification of multispectral image data: Approximate algorithm

    NASA Technical Reports Server (NTRS)

    Tilton, J. C. (Principal Investigator)

    1980-01-01

    An approximation to a classification algorithm incorporating spatial context information in a general, statistical manner is presented; the approximation is computationally less intensive and produces classifications that are nearly as accurate.

  9. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    NASA Astrophysics Data System (ADS)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited storage capacity is one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is a block cipher algorithm used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are hidden in 8 random images using the Least Significant Bit (LSB) algorithm; the embedded DES cipher blocks can later be extracted and merged back into one ciphertext.
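
    The LSB half of the scheme can be illustrated with a short, self-contained sketch: a cipher block (here a stand-in 8-byte value, since the DES step is not reproduced) is written into the least significant bits of a cover image and recovered again. This is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least significant bits of an 8-bit image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()                      # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden with lsb_embed."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Toy usage with a stand-in 8-byte "cipher block" (not real DES output)
block = bytes.fromhex("0123456789abcdef")
cover = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
stego = lsb_embed(cover, block)
assert lsb_extract(stego, len(block)) == block
```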

  10. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template based on a correlation measure between two images. Because there is no need to segment the image and the required computation is small, image correlation matching is a basic method of target tracking. This paper mainly studies an image matching algorithm for gray-scale images whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Difference) method. This method excels in real-time systems because of its low computational complexity. The SAD method is introduced first, and the most frequently used sub-pixel fitting algorithms are introduced at the same time. Those fitting algorithms cannot be used in real-time systems because they are too complex. Since target tracking often requires high real-time performance, we put forward a paraboloidal fitting algorithm based on the consideration above; this algorithm is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through an image matching simulation. By comparison, the precision difference between these two algorithms is small, less than 0.01 pixel. In order to research the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It is fixed to an arc pendulum table, and pictures are taken when the camera is rotated to different angles. A subarea of the original picture is chosen as the template, and the best matching spot is searched for using the image matching algorithm mentioned above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was researched. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
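
    A compact sketch of this combination is given below: an exhaustive SAD search locates the best integer-pixel position, and a parabola fitted through the neighboring SAD values refines it to sub-pixel precision. The sketch uses a separable one-dimensional parabola per axis rather than the paper's full paraboloidal (2D) fit, and the function names and toy data are assumptions.

```python
import numpy as np

def sad_match_subpixel(image, template):
    """Exhaustive SAD matching followed by a separable parabolic fit.

    Returns the (row, col) of the best match with sub-pixel refinement.
    A sketch of the general technique, not the authors' exact fit.
    """
    th, tw = template.shape
    tpl = template.astype(float)
    sad = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for r in range(sad.shape[0]):
        for c in range(sad.shape[1]):
            sad[r, c] = np.abs(image[r:r+th, c:c+tw].astype(float) - tpl).sum()

    r0, c0 = np.unravel_index(np.argmin(sad), sad.shape)

    def parabola_offset(m1, z, p1):
        # Vertex of a parabola through (-1, m1), (0, z), (+1, p1)
        denom = m1 - 2.0 * z + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    dr = dc = 0.0
    if 0 < r0 < sad.shape[0] - 1:
        dr = parabola_offset(sad[r0-1, c0], sad[r0, c0], sad[r0+1, c0])
    if 0 < c0 < sad.shape[1] - 1:
        dc = parabola_offset(sad[r0, c0-1], sad[r0, c0], sad[r0, c0+1])
    return r0 + dr, c0 + dc

# Toy usage: the template is cut from the image at (20, 30)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
tmpl = img[20:36, 30:46]
print(sad_match_subpixel(img, tmpl))   # close to (20.0, 30.0)
```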

  11. Optimization of fuels from waste composition with application of genetic algorithm.

    PubMed

    Małgorzata, Wzorek

    2014-05-01

    The objective of this article is to elaborate a method to optimize the composition of the fuels from sewage sludge (PBS fuel - fuel based on sewage sludge and coal slime, PBM fuel - fuel based on sewage sludge and meat and bone meal, PBT fuel - fuel based on sewage sludge and sawdust). As a tool for an optimization procedure, the use of a genetic algorithm is proposed. The optimization task involves the maximization of mass fraction of sewage sludge in a fuel developed on the basis of quality-based criteria for the use as an alternative fuel used by the cement industry. The selection criteria of fuels composition concerned such parameters as: calorific value, content of chlorine, sulphur and heavy metals. Mathematical descriptions of fuel compositions and general forms of the genetic algorithm, as well as the obtained optimization results are presented. The results of this study indicate that the proposed genetic algorithm offers an optimization tool, which could be useful in the determination of the composition of fuels that are produced from waste.

  12. Restarting and recentering genetic algorithm variations for DNA fragment assembly: The necessity of a multi-strategy approach.

    PubMed

    Hughes, James Alexander; Houghten, Sheridan; Ashlock, Daniel

    2016-12-01

    DNA fragment assembly - an NP-hard problem - is one of the major steps of DNA sequencing. Multiple strategies have been used for this problem, including greedy graph-based algorithms, de Bruijn graphs, and the overlap-layout-consensus approach. This study focuses on the overlap-layout-consensus approach. Heuristics and computational intelligence methods are combined to exploit their respective benefits. These algorithm combinations were able to produce high quality results surpassing the best results obtained by a number of competitive algorithms specially designed and tuned for this problem on thirteen of sixteen popular benchmarks. This work also reinforces the necessity of using multiple search strategies, as it is clearly observed that algorithm performance is dependent on problem instance; without a deeper look into many searches, top solutions could be missed entirely. Copyright © 2016. Published by Elsevier Ireland Ltd.

  13. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  14. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes bad; after a bad initialization, a poor local optimum can easily be obtained by the k-means algorithm. In this paper, we first modify the global k-means algorithm to eliminate the singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, proposing the global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
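
    The incremental search shared by the global k-means family can be sketched in a few lines: for each new cluster, every data point is tried as the initial position of the added center and the configuration with the lowest clustering error is kept. The sketch below shows only the baseline global k-means, not the proposed MinMax modification; the toy data and helper names are assumptions.

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain Lloyd iterations from given initial centers."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    err = ((X - centers[labels]) ** 2).sum()
    return centers, labels, err

def global_kmeans(X, K):
    """Incremental global k-means: add one center at a time, trying every
    data point as the new center's initial position and keeping the best."""
    centers = X.mean(axis=0, keepdims=True)          # k = 1 solution
    for _ in range(2, K + 1):
        best = None
        for x in X:                                   # candidate initial positions
            cand = np.vstack([centers, x[None, :]])
            c, _, err = kmeans(X, cand.copy())
            if best is None or err < best[1]:
                best = (c, err)
        centers = best[0]
    return centers

# Toy usage: three well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ((0, 0), (3, 0), (0, 3))])
print(global_kmeans(X, 3))
```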

  15. A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem.

    PubMed

    Dang, C; Xu, L

    2001-03-01

    In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.

  16. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine mitigating resolution limit problems using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100

  17. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fischer discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
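
    The filter representation lends itself to a small sketch: each candidate solution is a pair of left/right thresholds per input dimension, a sounding is rejected if any feature falls outside its interval, and the fitness rewards rejecting runs that would have failed while penalizing the loss of good runs. The sketch below uses synthetic stand-in data and a mutation-only GA; the fitness weighting and feature definitions are assumptions, not the JPL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50-dimensional "sounding features" and a flag marking
# whether the expensive retrieval would have succeeded (hypothetical rule).
X = rng.normal(size=(2000, 50))
ok = (np.abs(X[:, 0]) < 1.5) & (X[:, 3] > -1.0)

def fitness(bounds):
    """Reward rejecting failed runs while keeping successful ones.

    bounds has shape (50, 2): per-dimension (left, right) thresholds;
    a sounding is rejected if any feature falls outside its interval.
    """
    keep = np.all((X >= bounds[:, 0]) & (X <= bounds[:, 1]), axis=1)
    rejected_bad = np.sum(~keep & ~ok)
    rejected_good = np.sum(~keep & ok)
    return rejected_bad - 5.0 * rejected_good          # penalize lost good runs

def evolve(pop_size=40, gens=60):
    # Start from wide-open intervals, perturbed slightly per individual
    base = np.stack([X.min(0) - 1, X.max(0) + 1], axis=1)
    pop = np.stack([base + rng.normal(0, 0.5, size=(50, 2)) for _ in range(pop_size)])
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]            # truncation selection
        children = parents + rng.normal(0, 0.2, size=parents.shape)   # mutation only
        pop = np.concatenate([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

best = evolve()
print(fitness(best))
```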

  18. Evidence-based algorithm for diagnosis and assessment in psoriatic arthritis: results by Italian DElphi in psoriatic Arthritis (IDEA).

    PubMed

    Lapadula, G; Marchesoni, A; Salaffi, F; Ramonda, R; Salvarani, C; Punzi, L; Costa, L; Caso, F; Simone, D; Baiocchi, G; Scioscia, C; Di Carlo, M; Scarpa, R; Ferraccioli, G

    2016-12-16

    Psoriatic arthritis (PsA) is a chronic inflammatory disease involving skin, peripheral joints, entheses, and axial skeleton. The disease is frequently associated with extra-articular manifestations (EAMs) and comorbidities. In order to create a protocol for PsA diagnosis and global assessment of patients with an algorithm based on anamnestic, clinical, laboratory and imaging procedures, we established a Delphi study on a national scale, named Italian DElphi in psoriatic Arthritis (IDEA). After a literature search, a Delphi poll involving 52 rheumatologists was performed. On the basis of the literature search, 202 potential items were identified. The steering committee planned at least two Delphi rounds. In the first Delphi round, the experts judged each of the 202 items using a score ranging from 1 to 9 based on its increasing clinical relevance. The question posed to the experts was: "How relevant is this procedure/observation/sign/symptom for the assessment of a psoriatic arthritis patient?" Proposals of additional items not included in the questionnaire were also encouraged. The results of the poll were discussed by the steering committee, which evaluated the necessity of removing selected procedures or adding additional ones, according to criteria of clinical appropriateness and sustainability. A total of 43 recommended diagnosis and assessment procedures, recognized as items, were derived by combination of the Delphi survey and two National Expert Meetings, and grouped into different areas. Favourable opinion was reached in 100% of cases for several aspects covering the following areas: medical (familial and personal) history, physical evaluation, imaging tools, second level laboratory tests, disease activity measurement and extra-articular manifestations. After performing PsA diagnosis, identification of specific disease activity scores and clinimetric approaches was suggested for assessing the different clinical subsets. Further, results showed the need for

  19. A plant cell division algorithm based on cell biomechanics and ellipse-fitting

    PubMed Central

    Abera, Metadel K.; Verboven, Pieter; Defraeye, Thijs; Fanta, Solomon Workneh; Hertog, Maarten L. A. T. M.; Carmeliet, Jan; Nicolai, Bart M.

    2014-01-01

    Background and Aims The importance of cell division models in cellular pattern studies has been acknowledged since the 19th century. Most of the available models developed to date are limited to symmetric cell division with isotropic growth. Often, the actual growth of the cell wall is either not considered or is updated intermittently on a separate time scale to the mechanics. This study presents a generic algorithm that accounts for both symmetrically and asymmetrically dividing cells with isotropic and anisotropic growth. Actual growth of the cell wall is simulated simultaneously with the mechanics. Methods The cell is considered as a closed, thin-walled structure, maintained in tension by turgor pressure. The cell walls are represented as linear elastic elements that obey Hooke's law. Cell expansion is induced by turgor pressure acting on the yielding cell-wall material. A system of differential equations for the positions and velocities of the cell vertices as well as for the actual growth of the cell wall is established. Readiness to divide is determined based on cell size. An ellipse-fitting algorithm is used to determine the position and orientation of the dividing wall. The cell vertices, walls and cell connectivity are then updated and cell expansion resumes. Comparisons are made with experimental data from the literature. Key Results The generic plant cell division algorithm has been implemented successfully. It can handle both symmetrically and asymmetrically dividing cells coupled with isotropic and anisotropic growth modes. Development of the algorithm highlighted the importance of ellipse-fitting to produce randomness (biological variability) even in symmetrically dividing cells. Unlike previous models, a differential equation is formulated for the resting length of the cell wall to simulate actual biological growth and is solved simultaneously with the position and velocity of the vertices. Conclusions The algorithm presented can produce different

  20. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  1. Tomographic retrievals of ozone with the OMPS Limb Profiler: algorithm description and preliminary results

    NASA Astrophysics Data System (ADS)

    Zawada, Daniel J.; Rieger, Landon A.; Bourassa, Adam E.; Degenstein, Douglas A.

    2018-04-01

    Measurements of limb-scattered sunlight from the Ozone Mapping and Profiler Suite Limb Profiler (OMPS-LP) can be used to obtain vertical profiles of ozone in the stratosphere. In this paper we describe a two-dimensional, or tomographic, retrieval algorithm for OMPS-LP where variations are retrieved simultaneously in altitude and the along-orbital-track dimension. The algorithm has been applied to measurements from the center slit for the full OMPS-LP mission to create the publicly available University of Saskatchewan (USask) OMPS-LP 2D v1.0.2 dataset. Tropical ozone anomalies are compared with measurements from the Microwave Limb Sounder (MLS), where differences are less than 5 % of the mean ozone value for the majority of the stratosphere. Examples of near-coincident measurements with MLS are also shown, and agreement at the 5 % level is observed for the majority of the stratosphere. Both simulated retrievals and coincident comparisons with MLS are shown at the edge of the polar vortex, comparing the results to a traditional one-dimensional retrieval. The one-dimensional retrieval is shown to consistently overestimate the amount of ozone in areas of large horizontal gradients relative to both MLS and the two-dimensional retrieval.

  2. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, laboratory experiments, and a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed having most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the 6 degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.

  3. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: Moon, Mars, asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, laboratory experiments, and a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed having most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information. Namely, the 6 degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The results of the comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards like rocks and craters in accordance with ALHAT requirements.

  4. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.

  5. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    When a genetic algorithm (GA) is used to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, the large search space and complex constraints make it easy to produce infeasible solutions, which affects the efficiency and quality of the algorithm. This paper presents an improved MKGA that includes three operators: preprocessing, bid insertion and exchange recombination, and uses a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in terms of population size and computation. Problems that the traditional branch-and-bound algorithm struggles to solve can be solved by the improved MKGA with better results.

  6. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
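
    For a concrete picture of the kind of GA such a preprocessor would configure, here is a minimal real-coded GA for unconstrained minimization. The population size, crossover and mutation probabilities, and search-space bounds below are placeholders for the values a preprocessor would select per problem; this is an illustrative sketch, not the authors' code.

```python
import numpy as np

def genetic_minimize(f, bounds, pop_size=60, gens=200,
                     crossover_p=0.9, mutation_p=0.1, seed=0):
    """Minimal real-coded GA for unconstrained minimization.

    f      : objective taking a 1D numpy vector
    bounds : list of (low, high) per dimension (the search space)
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(gens):
        fit = np.array([f(x) for x in pop])
        # Tournament selection (lower objective wins)
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        # Arithmetic crossover between consecutive parents
        mates = np.roll(parents, 1, axis=0)
        alpha = rng.random((pop_size, 1))
        do_cx = rng.random((pop_size, 1)) < crossover_p
        children = np.where(do_cx, alpha * parents + (1 - alpha) * mates, parents)
        # Gaussian mutation, clipped back to the search space
        do_mut = rng.random(children.shape) < mutation_p
        children = np.clip(children + do_mut * rng.normal(0, 0.1 * (hi - lo), children.shape),
                           lo, hi)
        # Elitism: carry the best individual forward
        children[0] = pop[np.argmin(fit)]
        pop = children
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)], fit.min()

# Usage: minimize the Rosenbrock function on [-2, 2]^2
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(genetic_minimize(rosen, [(-2, 2), (-2, 2)]))
```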

  7. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.

  8. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution, x, carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied, which produces a correction to the computed solution at each iteration, which then yields the method that is commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. Running time: seconds/minutes
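
    The refinement loop described above is easy to sketch: the factorization/solve is done in single precision, while the residual and the solution update are accumulated in double precision. The sketch below uses repeated single-precision solves rather than reusing stored LU factors, and the test matrix is an assumption; it illustrates the idea rather than reproducing the reference implementation.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Iterative refinement: solve in float32, correct in float64."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in double precision
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d                                          # correction step
    return x

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)         # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
# Relative residual should approach double-precision level for a
# well-conditioned system after a few refinement steps.
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```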

  9. Evaluating the utility of syndromic surveillance algorithms for screening to detect potentially clonal hospital infection outbreaks

    PubMed Central

    Talbot, Thomas R; Schaffner, William; Bloch, Karen C; Daniels, Titus L; Miller, Randolph A

    2011-01-01

    Objective: The authors evaluated algorithms commonly used in syndromic surveillance for use as screening tools to detect potentially clonal outbreaks for review by infection control practitioners. Design: Study phase 1 applied four aberrancy detection algorithms (CUSUM, EWMA, space-time scan statistic, and WSARE) to retrospective microbiologic culture data, producing a list of past candidate outbreak clusters. In phase 2, four infectious disease physicians categorized the phase 1 algorithm-identified clusters to ascertain algorithm performance. In phase 3, project members combined the algorithms to create a unified screening system and conducted a retrospective pilot evaluation. Measurements: The study calculated recall and precision for each algorithm, and created precision-recall curves for various methods of combining the algorithms into a unified screening tool. Results: Individual algorithm recall and precision ranged from 0.21 to 0.31 and from 0.053 to 0.29, respectively. Few candidate outbreak clusters were identified by more than one algorithm. The best method of combining the algorithms yielded an area under the precision-recall curve of 0.553. The phase 3 combined system detected all infection control-confirmed outbreaks during the retrospective evaluation period. Limitations: Lack of phase 2 reviewers' agreement indicates that subjective expert review was an imperfect gold standard. Less conservative filtering of culture results and alternate parameter selection for each algorithm might have improved algorithm performance. Conclusion: Hospital outbreak detection presents different challenges than traditional syndromic surveillance. Nevertheless, algorithms developed for syndromic surveillance have potential to form the basis of a combined system that might perform clinically useful hospital outbreak screening. PMID:21606134

  10. Research on registration algorithm for check seal verification

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Tiegen

    2008-03-01

    Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First of all, the complex characteristics of check seals are analyzed. To eliminate the differences in producing conditions and the disturbance caused by background and writing in the check image, many methods are used in the pre-processing for check seal verification, such as color component transformation, a linear transform to the gray-scale image, median filtering, Otsu thresholding, closing operations and the labeling algorithm of mathematical morphology. After the processes above, a good binary seal image can be obtained. On the basis of the traditional registration algorithm, a double-level registration method including rough and precise registration is proposed. The deflection angle of the precise registration method is precise to 0.1°. This paper introduces the concepts of inner difference and outer difference and uses their percentages to judge whether a seal is real or fake. The experimental results on a large number of check seals are satisfactory. They show that the methods and algorithms presented have good robustness to noisy sealing conditions and satisfactory tolerance of within-class differences.

  11. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t(1) to t(2)" is addressed, and for range anomaly where the question "is a target present at time t within ranges R(1) and R(2)" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO(2) lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
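
    The two-hypothesis Gaussian-mixture step can be sketched compactly: a two-component mixture is fitted to one-dimensional detection scores, the component with the lower mean is treated as background, and a threshold together with detection and false-alarm probabilities follows from the fitted parameters. The sketch below uses scikit-learn's EM-based GaussianMixture and synthetic scores; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Hypothetical 1D detection scores: mostly background plus a few target returns
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 950),       # background hypothesis
                         rng.normal(5.0, 1.0, 50)])       # anomaly hypothesis

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
bg, tg = np.argsort(gmm.means_.ravel())                   # lower mean = background

# Threshold: first score value at which the posterior favors the anomaly class
grid = np.linspace(scores.min(), scores.max(), 500).reshape(-1, 1)
post = gmm.predict_proba(grid)
threshold = grid[np.argmax(post[:, tg] > 0.5)][0]

mu = gmm.means_.ravel()
sd = np.sqrt(gmm.covariances_.ravel())
p_detect = 1 - norm.cdf(threshold, mu[tg], sd[tg])        # P(score > threshold | anomaly)
p_false_alarm = 1 - norm.cdf(threshold, mu[bg], sd[bg])   # P(score > threshold | background)
print(threshold, p_detect, p_false_alarm)
```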

  12. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.

  13. Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Xia-zhu; Xu, Ya-wei

    2017-11-01

    On the basis of DT-CWT (Dual-Tree Complex Wavelet Transform) theory, an approach based on MOPSO (Multi-Objective Particle Swarm Optimization) was proposed to objectively choose the fusion weights of the low frequency sub-bands. High and low frequency sub-bands were produced by the DT-CWT. The absolute value of the coefficients was adopted as the fusion rule for the high frequency sub-bands. The fusion weights of the low frequency sub-bands were used as particles in MOPSO. Spatial frequency and average gradient were adopted as the two fitness functions in MOPSO. The experimental results show that the proposed approach performs better than average fusion and fusion methods based on local variance and local energy, respectively, in brightness, clarity and quantitative evaluation, which includes entropy, spatial frequency, average gradient and QAB/F.

  14. Veggie ISS Validation Test Results and Produce Consumption

    NASA Technical Reports Server (NTRS)

    Massa, Gioia; Hummerick, Mary; Spencer, LaShelle; Smith, Trent

    2015-01-01

    The Veggie vegetable production system flew to the International Space Station (ISS) in the spring of 2014. The first set of plants, Outredgeous red romaine lettuce, was grown, harvested, frozen, and returned to Earth in October. Ground control and flight plant tissue was sub-sectioned for microbial analysis, anthocyanin antioxidant phenolic analysis, and elemental analysis. Microbial analysis was also performed on samples swabbed on orbit from plants, Veggie bellows, and plant pillow surfaces, on water samples, and on samples of roots, media, and wick material from two returned plant pillows. Microbial levels of plants were comparable to ground controls, with some differences in community composition. The range in aerobic bacterial plate counts between individual plants was much greater in the ground controls than in flight plants. No pathogens were found. Anthocyanin concentrations were the same between ground and flight plants, while antioxidant and phenolic levels were slightly higher in flight plants. Elements varied, but key target elements for astronaut nutrition were similar between ground and flight plants. Aerobic plate counts of the flight plant pillow components were significantly higher than ground controls. Surface swab samples showed low microbial counts, with most below detection limits. Flight plant microbial levels were less than bacterial guidelines set for non-thermostabilized food and near or below those for fungi. These guidelines are not for fresh produce but are the closest approximate standards. Forward work includes the development of standards for space-grown produce. A produce consumption strategy for Veggie on ISS includes pre-flight assessments of all crops to down-select candidates, wiping flight-grown plants with sanitizing food wipes, and regular Veggie hardware cleaning and microbial monitoring. Produce could then be consumed by astronauts; however, some plant material would be reserved and returned for analysis. Implementation of

  15. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting the whole patch, rather than individual pixels/objects, into one set or the other would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
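
    A minimal sketch of the patch-level bootstrapping idea is given below, assuming each training pixel carries a patch identifier; the tree settings and the majority-vote out-of-bag estimate are illustrative choices, not the authors' implementation.

      # Sketch of patch-level (rather than pixel-level) bootstrapping for a random forest.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def patch_bagged_forest(X, y, patch_ids, n_trees=100, random_state=0):
          rng = np.random.default_rng(random_state)
          patches = np.unique(patch_ids)
          trees, oob_votes = [], {}
          for _ in range(n_trees):
              # Bootstrap whole patches, so pixels from one patch are never split
              # between the in-bag and out-of-bag sets.
              in_bag_patches = rng.choice(patches, size=patches.size, replace=True)
              in_bag = np.isin(patch_ids, in_bag_patches)
              tree = DecisionTreeClassifier(max_features="sqrt",
                                            random_state=int(rng.integers(1 << 31)))
              tree.fit(X[in_bag], y[in_bag])
              trees.append(tree)
              oob = ~in_bag
              if oob.any():
                  preds = tree.predict(X[oob])
                  for idx, p in zip(np.where(oob)[0], preds):
                      oob_votes.setdefault(idx, []).append(p)
          # Out-of-bag accuracy accumulated only over pixels from fully held-out patches.
          oob_idx = np.array(sorted(oob_votes))
          oob_pred = np.array([max(set(oob_votes[i]), key=oob_votes[i].count) for i in oob_idx])
          oob_acc = float(np.mean(oob_pred == y[oob_idx]))
          return trees, oob_acc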

  16. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, satellite-based data could contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfalls directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfalls and then applies temporal and spatial averaging to produce level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfalls. However, the algorithm is designed to produce instantaneous rainfalls at an optimal resolution showing reduced non-linearity in the brightness temperature (TB)-rain rate (R) relations. It is found that this resolution tends to effectively utilize emission channels, whose footprints are relatively larger than those of scattering channels. The algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained from WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally applied. The entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the one that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP.
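
    The Bayesian inversion step can be sketched as a likelihood-weighted average over the selected sub-DB. The snippet below is an illustration only: the Gaussian likelihood with a diagonal observation-error covariance is an assumption, and the channel count and error standard deviation are placeholders.

      # Sketch of a Bayesian retrieval step: the rain rate is a likelihood-weighted
      # mean over the selected a-priori sub-database.
      import numpy as np

      def bayesian_rain_rate(tb_obs, tb_db, rain_db, obs_error_std=2.0):
          """tb_obs: (n_channels,); tb_db: (n_entries, n_channels); rain_db: (n_entries,)."""
          # Gaussian likelihood with a diagonal error covariance (obs_error_std**2 * I).
          d2 = np.sum(((tb_db - tb_obs) / obs_error_std) ** 2, axis=1)
          w = np.exp(-0.5 * (d2 - d2.min()))   # shift for numerical stability
          return float(np.sum(w * rain_db) / np.sum(w))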

  17. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.

  18. Multi Objective Optimization of Yarn Quality and Fibre Quality Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Ghosh, Anindya; Das, Subhasis; Banerjee, Debamalya

    2013-03-01

    The quality and cost of the resulting yarn play a significant role in determining its end application. The challenging task of any spinner lies in producing a good quality yarn with an added cost benefit. The present work performs a multi-objective optimization on two objectives, viz. maximization of cotton yarn strength and minimization of raw material quality. The first objective function has been formulated based on the artificial neural network input-output relation between cotton fibre properties and yarn strength. The second objective function is formulated with the well known regression equation of the spinning consistency index. These two objectives are conflicting in nature, i.e. no single combination of cotton fibre parameters exists that simultaneously produces maximum yarn strength and minimum cotton fibre quality. Therefore, there are several optimal solutions, from which a trade-off is needed depending upon the requirements of the user. In this work, the optimal solutions are obtained with an elitist multi-objective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II). These optimum solutions may lead to the efficient exploitation of raw materials to produce better quality yarns at low cost.
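
    The non-dominated (Pareto) filtering at the heart of NSGA-II can be illustrated with a short sketch. Below, the two objectives are written as a minimization pair (negative strength, fibre quality index); the objective models themselves stand in for the ANN and the spinning-consistency-index regression and are assumptions.

      # Sketch of the Pareto (non-dominated) filter underlying NSGA-II for two objectives.
      def dominates(a, b):
          """a dominates b if it is no worse in every objective and better in at least one."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def non_dominated(solutions, objectives):
          """solutions: list of parameter vectors; objectives: callable returning a tuple to minimize."""
          scored = [(s, objectives(s)) for s in solutions]
          return [s for s, f in scored
                  if not any(dominates(g, f) for _, g in scored if g != f)]

      # Placeholder objective pair: (-strength, quality_index), both to be minimized.
      front = non_dominated([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]],
                            lambda p: (-(0.6 * p[0] + 0.4 * p[1]), 0.7 * p[0] + 0.3 * p[1]))
      print(front)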

  19. An algorithmic and information-theoretic approach to multimetric index construction

    USGS Publications Warehouse

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.

    2013-01-01

    The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically have been developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We apply the algorithm to simulated data to demonstrate the predictive capacity of the final MMIs and to real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified

  20. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: algorithm development and flight test results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knox, C.E.

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  1. Speed and convergence properties of gradient algorithms for optimization of IMRT.

    PubMed

    Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe

    2004-05-01

    Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume and EUD based objective functions, Newton's method far

  2. Recombinant Temporal Aberration Detection Algorithms for Enhanced Biosurveillance

    PubMed Central

    Murphy, Sean Patrick; Burkom, Howard

    2008-01-01

    Objective: Broadly, this research aims to improve the outbreak detection performance and, therefore, the cost effectiveness of automated syndromic surveillance systems by building novel, recombinant temporal aberration detection algorithms from components of previously developed detectors. Methods: This study decomposes existing temporal aberration detection algorithms into two sequential stages and investigates the individual impact of each stage on outbreak detection performance. The data forecasting stage (Stage 1) generates predictions of time series values a certain number of time steps in the future based on historical data. The anomaly measure stage (Stage 2) compares features of this prediction to corresponding features of the actual time series to compute a statistical anomaly measure. A Monte Carlo simulation procedure is then used to examine the recombinant algorithms’ ability to detect synthetic aberrations injected into authentic syndromic time series. Results: New methods obtained with procedural components of published, sometimes widely used, algorithms were compared to the known methods using authentic datasets with plausible stochastic injected signals. Performance improvements were found for some of the recombinant methods, and these improvements were consistent over a range of data types, outbreak types, and outbreak sizes. For gradual outbreaks, the WEWD MovAvg7+WEWD Z-Score recombinant algorithm performed best; for sudden outbreaks, the HW+WEWD Z-Score performed best. Conclusion: This decomposition was found not only to yield valuable insight into the effects of the aberration detection algorithms but also to produce novel combinations of data forecasters and anomaly measures with enhanced detection performance. PMID:17947614
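
    The two-stage decomposition can be illustrated with a short sketch: Stage 1 forecasts the next count with a 7-day moving average and Stage 2 converts the forecast residual into a Z-score. The window and baseline lengths are illustrative, and the specific weighted variants evaluated in the study (e.g. WEWD) are not reproduced.

      # Sketch of the two-stage decomposition: Stage 1 forecasts the series, Stage 2
      # turns the forecast residual into a Z-score anomaly measure.
      import numpy as np

      def moving_average_forecast(series, window=7):
          """Stage 1: predict the next value as the mean of the last `window` values."""
          return float(np.mean(series[-window:]))

      def z_score_anomaly(series, observed, window=7, baseline=28):
          """Stage 2: standardize today's forecast residual against recent residuals.
          Assumes len(series) >= window + baseline."""
          residuals = []
          for t in range(len(series) - baseline, len(series)):
              residuals.append(series[t] - np.mean(series[t - window:t]))
          sd = np.std(residuals, ddof=1) or 1.0
          return (observed - moving_average_forecast(series, window)) / sd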

  3. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.

  4. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
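
    The master-slave pattern can be sketched in a few lines. The example below uses a local process pool to stand in for the slave nodes and a bit-counting fitness to stand in for the circuit simulator; both are illustrative assumptions, not the system described in the paper.

      # Sketch of the master-slave (processor farm) pattern: worker processes score
      # candidate bit-strings while the master performs selection, crossover and mutation.
      import random
      from multiprocessing import Pool

      GENOME_LEN, POP_SIZE, GENERATIONS = 64, 40, 25

      def fitness(genome):                      # placeholder: count of 1-bits
          return sum(genome)

      def crossover(a, b):
          cut = random.randrange(1, GENOME_LEN)
          return a[:cut] + b[cut:]

      def mutate(genome, rate=0.01):
          return [1 - g if random.random() < rate else g for g in genome]

      if __name__ == "__main__":
          pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
          with Pool() as workers:                      # the "slave" nodes
              for _ in range(GENERATIONS):
                  scores = workers.map(fitness, pop)   # parallel fitness evaluation
                  ranked = [g for _, g in sorted(zip(scores, pop), key=lambda s: -s[0])]
                  parents = ranked[: POP_SIZE // 2]    # master: selection
                  children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                              for _ in range(POP_SIZE - len(parents))]
                  pop = parents + children
          print("best fitness:", max(map(fitness, pop)))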

  5. BFACF-style algorithms for polygons in the body-centered and face-centered cubic lattices

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Rechnitzer, A.

    2011-04-01

    In this paper, the elementary moves of the BFACF-algorithm (Aragão de Carvalho and Caracciolo 1983 Phys. Rev. B 27 1635-45, Aragão de Carvalho and Caracciolo 1983 Nucl. Phys. B 215 209-48, Berg and Foester 1981 Phys. Lett. B 106 323-6) for lattice polygons are generalized to elementary moves of BFACF-style algorithms for lattice polygons in the body-centered (BCC) and face-centered (FCC) cubic lattices. We prove that the ergodicity classes of these new elementary moves coincide with the knot types of unrooted polygons in the BCC and FCC lattices and so expand a similar result for the cubic lattice (see Janse van Rensburg and Whittington (1991 J. Phys. A: Math. Gen. 24 5553-67)). Implementations of these algorithms for knotted polygons using the GAS algorithm produce estimates of the minimal length of knotted polygons in the BCC and FCC lattices.

  6. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    ERIC Educational Resources Information Center

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  7. Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.

    PubMed

    Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza

    2012-05-01

    Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task which engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention. For this goal, various techniques have been applied. However, Markov Random Field (MRF) based algorithms have produced reasonable results in noisy images compared to other methods. MRF seeks a label field which minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution, with a heavy computation burden. For this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on MRF and a hybrid of social algorithms, comprising an ant colony optimization (ACO) and a Gossiping algorithm, which can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find a better path using neighborhood information. This interaction therefore causes the algorithm to converge to an optimum solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid of MRF-ACO in speed and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Evaluation of focused ultrasound algorithms: Issues for reducing pre-focal heating and treatment time.

    PubMed

    Yiannakou, Marinos; Trimikliniotis, Michael; Yiallouras, Christos; Damianou, Christakis

    2016-02-01

    Due to heating in the pre-focal field, the delays between successive movements in high intensity focused ultrasound (HIFU) are sometimes as long as 60 s, resulting in treatment times on the order of 2-3 h. Because there is generally a requirement to reduce treatment time, we were motivated to explore alternative transducer motion algorithms in order to reduce pre-focal heating and treatment time. A 1 MHz single element transducer with 4 cm diameter and 10 cm focal length was used. A simulation model was developed that estimates the temperature, thermal dose and lesion development in the pre-focal field. The simulated temperature history, combined with the motion algorithms, produced thermal maps in the pre-focal region. A polyacrylamide gel phantom was used to evaluate the induced pre-focal heating for each motion algorithm and to assess the accuracy of the simulation model. Three of the six algorithms, those having successive steps close to each other, exhibited severe heating in the pre-focal field. Minimal heating was produced with the algorithms having successive steps far apart from each other (square, square spiral and random). The last three algorithms were improved further (with a small cost in time), thus completely eliminating the pre-focal heating and substantially reducing the treatment time compared to traditional algorithms. Out of the six algorithms, 3 were successful in eliminating the pre-focal heating completely. Because these 3 algorithms required no delay between successive movements (except in the last part of the motion), the treatment time was reduced by 93%. Therefore, it will be possible in the future to achieve focused ultrasound treatment times shorter than 30 min. The rate of ablated volume achieved with one of the proposed algorithms was 71 cm(3)/h. The intention of this pilot study was to demonstrate that the navigation algorithms play the most important role in reducing pre-focal heating. By evaluating in

  9. Lightning Jump Algorithm and Relation to Thunderstorm Cell Tracking, GLM Proxy and Other Meteorological Measurements

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2012-01-01

    The lightning jump algorithm has a robust history in correlating upward trends in lightning to severe and hazardous weather occurrence. The algorithm uses the correlation between the physical principles that govern an updraft's ability to produce microphysical and kinematic conditions conducive for electrification and its role in the development of severe weather conditions. Recent work has demonstrated that the lightning jump algorithm concept holds significant promise in the operational realm, aiding in the identification of thunderstorms that have potential to produce severe or hazardous weather. However, a large amount of work still needs to be completed in spite of these positive results. The total lightning jump algorithm is not a stand-alone concept that can be used independent of other meteorological measurements, parameters, and techniques. For example, the algorithm is highly dependent upon thunderstorm tracking to build lightning histories on convective cells. Current tracking methods show that thunderstorm cell tracking is most reliable and cell histories are most accurate when radar information is incorporated with lightning data. In the absence of radar data, the cell tracking is a bit less reliable but the value added by the lightning information is much greater. For optimal application, the algorithm should be integrated with other measurements that assess storm scale properties (e.g., satellite, radar). Therefore, the recent focus of this research effort has been assessing the lightning jump's relation to thunderstorm tracking, meteorological parameters, and its potential uses in operational meteorology. Furthermore, the algorithm must be tailored for the optically-based GOES-R Geostationary Lightning Mapper (GLM), as what has been observed using Very High Frequency Lightning Mapping Array (VHF LMA) measurements will not exactly translate to what will be observed by GLM due to resolution and other instrument differences. Herein, we present some of

  10. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that HAFSA is a fast and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization algorithm, it performs well and has broad application prospects.

  11. An algorithm for longitudinal registration of PET/CT images acquired during neoadjuvant chemotherapy in breast cancer: preliminary results.

    PubMed

    Li, Xia; Abramson, Richard G; Arlinghaus, Lori R; Chakravarthy, Anuradha Bapsi; Abramson, Vandana; Mayer, Ingrid; Farley, Jaime; Delbeke, Dominique; Yankeelov, Thomas E

    2012-11-16

    .044), respectively (p = 0.005). The results indicate that the constrained ABA algorithm can accurately align prone breast FDG-PET images acquired at different time points while keeping the tumor from being substantially compressed or distorted. NCT00474604.

  12. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available

  13. Inductive learning of thyroid functional states using the ID3 algorithm. The effect of poor examples on the learning result.

    PubMed

    Forsström, J

    1992-01-01

    The ID3 algorithm for inductive learning was tested using preclassified material for patients suspected to have a thyroid illness. Classification followed a rule-based expert system for the diagnosis of thyroid function. Thus, the knowledge to be learned was limited to the rules existing in the knowledge base of that expert system. The learning capability of the ID3 algorithm was tested with unselected learning material (with some inherent missing data) and with selected learning material (no missing data). The selected learning material was a subgroup which formed a part of the unselected learning material. When the number of learning cases was increased, the accuracy of the program improved. When the learning material was large enough, an increase in the learning material did not improve the results further. A better learning result was achieved with the selected learning material, which contained no missing data, than with the unselected learning material. With this material we demonstrate a weakness in the ID3 algorithm: it cannot extract the available information from good example cases if poor examples are added to the data.
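
    The core of ID3 is the entropy-based attribute selection step. The sketch below shows that step in isolation; the data layout (rows of categorical attribute values plus a class label) is an assumption and is not the thyroid rule base used in the study.

      # Sketch of the ID3 attribute-selection step: pick the attribute with the
      # highest information gain.
      import math
      from collections import Counter

      def entropy(labels):
          n = len(labels)
          return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

      def information_gain(rows, labels, attr_index):
          base = entropy(labels)
          by_value = {}
          for row, lab in zip(rows, labels):
              by_value.setdefault(row[attr_index], []).append(lab)
          remainder = sum(len(subset) / len(labels) * entropy(subset)
                          for subset in by_value.values())
          return base - remainder

      def best_attribute(rows, labels, attr_indices):
          return max(attr_indices, key=lambda i: information_gain(rows, labels, i))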

  14. Improving the MODIS Global Snow-Mapping Algorithm

    NASA Technical Reports Server (NTRS)

    Klein, Andrew G.; Hall, Dorothy K.; Riggs, George A.

    1997-01-01

    An algorithm (Snowmap) is under development to produce global snow maps at 500 meter resolution on a daily basis using data from the NASA MODIS instrument. MODIS, the Moderate Resolution Imaging Spectroradiometer, will be launched as part of the first Earth Observing System (EOS) platform in 1998. Snowmap is a fully automated, computationally frugal algorithm that will be ready to implement at launch. Forests represent a major limitation to the global mapping of snow cover, as a forest canopy both obscures and shadows the snow underneath. Landsat Thematic Mapper (TM) and MODIS Airborne Simulator (MAS) data are used to investigate the changes in reflectance that occur as a forest stand becomes snow covered and to propose changes to the Snowmap algorithm that will improve snow classification accuracy in forested areas.

  15. A composition algorithm based on crossmodal taste-music correspondences

    PubMed Central

    Mesz, Bruno; Sigman, Mariano; Trevisan, Marcos A.

    2012-01-01

    While there is broad consensus about the structural similarities between language and music, comparably less attention has been devoted to semantic correspondences between these two ubiquitous manifestations of human culture. We have investigated the relations between music and a narrow and bounded domain of semantics: the words and concepts referring to taste sensations. In a recent work, we found that taste words were consistently mapped to musical parameters. Bitter is associated with low-pitched and continuous music (legato), salty is characterized by silences between notes (staccato), sour is high pitched, dissonant and fast and sweet is consonant, slow and soft (Mesz et al., 2011). Here we extended these ideas, in a synergistic dialog between music and science, investigating whether music can be algorithmically generated from taste-words. We developed and implemented an algorithm that exploits a large corpus of classic and popular songs. New musical pieces were produced by choosing fragments from the corpus and modifying them to minimize their distance to the region in musical space that characterizes each taste. In order to test the capability of the produced music to elicit significant associations with the different tastes, musical pieces were produced and judged by a group of non-musicians. Results showed that participants could decode well above chance the taste-word of the composition. We also discuss how our findings can be expressed in a performance bridging music and cognitive science. PMID:22557952

  16. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  17. Basis for a neuronal version of Grover's quantum algorithm

    PubMed Central

    Clark, Kevin B.

    2014-01-01

    Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church–Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical “subroutines” involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N^(1/2))) needed to find some “target” solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca2+ response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca2+-induced Ca2+ release and the search (or signaling) velocity of Ca2+ wave propagation. As chemical processes, such as the duration of Ca2+ mobilization, become rate-limiting over interstore distances, Ca2+ waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca2+ diffusion coefficient, D^(1/2), matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca2+ signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. A neuronal version of Grover's quantum algorithm figures to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional response
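
    For reference, the quadratic speed-up invoked above follows from standard Grover bookkeeping (a textbook result, not a claim specific to this article): with a single marked item among N, the rotation angle, success probability after k iterations, and optimal iteration count are

      \[
        \sin\theta = \frac{1}{\sqrt{N}}, \qquad
        P_{\mathrm{success}}(k) = \sin^2\!\bigl((2k+1)\theta\bigr), \qquad
        k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{N},
      \]

    so the number of iterations, and hence the algorithmic time, scales as O(N^(1/2)).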

  18. The effect of algorithms on copy number variant detection.

    PubMed

    Tsuang, Debby W; Millard, Steven P; Ely, Benjamin; Chi, Peter; Wang, Kenneth; Raskind, Wendy H; Kim, Sulgi; Brkanac, Zoran; Yu, Chang-En

    2010-12-30

    The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.

  19. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    PubMed

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.
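
    The consensus-ensemble rule can be sketched directly: a record keeps a label only if enough of the trained classifiers agree on it. The snippet below assumes scikit-learn-style classifiers that are already trained; the threshold of six votes mirrors the criterion reported above, while everything else is illustrative.

      # Sketch of the consensus-ensemble rule: accept a label only when at least
      # `min_votes` classifiers agree, otherwise leave the record uncoded.
      from collections import Counter

      def consensus_label(classifiers, record_features, min_votes=6):
          votes = Counter(clf.predict([record_features])[0] for clf in classifiers)
          label, count = votes.most_common(1)[0]
          return label if count >= min_votes else None   # None -> left unclassified

      def coverage(labels):
          """Share of records that received a consensus label."""
          return sum(l is not None for l in labels) / len(labels)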

  20. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

    Optimal sensor placement technique plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) have been taken as the fitness function, respectively, so that three placement designs were produced. A decimal two-dimensional array coding method, instead of the binary coding method, is proposed to code the solution. A forced mutation operator is introduced when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The obtained optimal sensor placements using the improved genetic algorithm are compared with those obtained by the existing genetic algorithm using the binary coding method. Further, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantages of the different fitness functions. The results showed that the innovations in the genetic algorithm proposed in this paper can enlarge the gene storage and improve the convergence of the algorithm. More importantly, the three optimal sensor placement methods can all provide reliable results and identify the vibration characteristics of the 12-bay plain truss model accurately.
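
    A MAC-based fitness of the kind described above can be sketched as follows; the mode-shape matrix layout and the way the off-diagonal penalty is folded into a single score are illustrative assumptions rather than the paper's exact formulation.

      # Sketch of a MAC-based fitness for a candidate sensor layout: restrict the
      # mode shapes to the candidate sensor DOFs and penalize large off-diagonal
      # MAC terms (well-distinguished modes give a fitness near 1).
      import numpy as np

      def mac_matrix(phi):
          """phi: (n_sensors, n_modes) mode-shape matrix restricted to the sensor DOFs."""
          num = np.abs(phi.T @ phi) ** 2
          den = np.outer(np.sum(phi ** 2, axis=0), np.sum(phi ** 2, axis=0))
          return num / den

      def mac_fitness(mode_shapes, sensor_dofs):
          """mode_shapes: (n_dofs, n_modes); sensor_dofs: index array of candidate sensors."""
          mac = mac_matrix(mode_shapes[np.asarray(sensor_dofs), :])
          off_diag = mac - np.diag(np.diag(mac))
          return 1.0 - off_diag.max()          # to be maximized by the genetic algorithm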

  1. Quick fuzzy backpropagation algorithm.

    PubMed

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms, respectively, are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems, data mining, and other applications.

  2. Remote sensing imagery classification using multi-objective gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2016-10-01

    Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) (Jm) measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first applied to extract texture features of the RSI. Then, the texture features are combined with the spectral features to construct the spatial-spectral feature space/set of the RSI. Afterwards, clustering of the spatial-spectral feature set is carried out on the basis of the proposed method. To be specific, cluster centers are randomly generated initially. After that, the cluster centers are updated and optimized adaptively by employing the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained. Therefore, a number of classification results of the RSI are produced and users can pick the most promising one according to their problem requirements. To quantitatively and qualitatively validate the effectiveness of the proposed method, it was applied to classify two aerial high-resolution remote sensing images. The obtained classification results are compared with those produced by two single-cluster-validity-index-based methods and two state-of-the-art multi-objective optimization based classification methods. The comparison results show that the proposed method can achieve more accurate RSI classification.

  3. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    NASA Astrophysics Data System (ADS)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric cryptography algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that works by XORing the plaintext with the key. To improve the security of the symmetric stream cipher algorithm, an improvement is made by using a Triple Transposition Key, developed from the Transposition Cipher, together with the Base64 algorithm as the final encryption step. Experiments show that the resulting ciphertext is sufficiently random.
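
    The overall pipeline — transpose the key, XOR it with the plaintext, then Base64-encode the ciphertext — can be sketched as below. The abstract does not specify the exact triple-transposition construction, so a columnar transposition applied three times stands in for it; the key, column width and message are placeholders.

      # Sketch of the pipeline: triple-transpose the key, XOR it with the plaintext,
      # then Base64-encode the result.  The transposition here is a stand-in.
      import base64
      from itertools import cycle

      def columnar_transpose(text, width=4):
          padded = text + " " * (-len(text) % width)
          return "".join(padded[c::width] for c in range(width))

      def triple_transposition_key(key):
          for _ in range(3):
              key = columnar_transpose(key)
          return key

      def encrypt(plaintext, key):
          k = triple_transposition_key(key).encode()
          xored = bytes(p ^ kb for p, kb in zip(plaintext.encode(), cycle(k)))
          return base64.b64encode(xored).decode()

      def decrypt(token, key):
          k = triple_transposition_key(key).encode()
          data = base64.b64decode(token)
          return bytes(c ^ kb for c, kb in zip(data, cycle(k))).decode()

      # round trip with placeholder key and message
      assert decrypt(encrypt("attack at dawn", "S3CR3TKEY"), "S3CR3TKEY") == "attack at dawn"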

  4. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, an heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two networks structure. These new algorithms produce a better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

  5. A memory-efficient staining algorithm in 3D seismic modelling and imaging

    NASA Astrophysics Data System (ADS)

    Jia, Xiaofeng; Yang, Lu

    2017-08-01

    The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.

  6. A low-dispersion, exactly energy-charge-conserving semi-implicit relativistic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Luis, Chacon; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian

    2017-10-01

    Leap-frog based explicit algorithms, either "energy-conserving" or "momentum-conserving", do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes, regardless of timestep sizes. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g. laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy-and-charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations, and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation, exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g. a Weibel instability run for 10^7 timesteps, and LPI applications).

  7. Algorithms in Discrepancy Theory and Lattices

    NASA Astrophysics Data System (ADS)

    Ramadas, Harishchandra

    This thesis deals with algorithmic problems in discrepancy theory and lattices, and is based on two projects I worked on while at the University of Washington in Seattle. A brief overview is provided in Chapter 1 (Introduction). Chapter 2 covers joint work with Avi Levy and Thomas Rothvoss in the field of discrepancy minimization. A well-known theorem of Spencer shows that any set system with n sets over n elements admits a coloring of discrepancy O(√n). While the original proof was non-constructive, recent progress brought polynomial time algorithms by Bansal, Lovett and Meka, and Rothvoss. All those algorithms are randomized, even though Bansal's algorithm admitted a complicated derandomization. We propose an elegant deterministic polynomial time algorithm that is inspired by Lovett-Meka as well as the Multiplicative Weight Update method. The algorithm iteratively updates a fractional coloring while controlling the exponential weights that are assigned to the set constraints. A conjecture by Meka suggests that Spencer's bound can be generalized to symmetric matrices. We prove that n x n matrices that are block diagonal with block size q admit a coloring of discrepancy O(√n · √(log q)). Bansal, Dadush and Garg recently gave a randomized algorithm to find a vector x with entries in {-1,1} with ∥Ax∥∞ ≤ O(√(log n)) in polynomial time, where A is any matrix whose columns have length at most 1. We show that our method can be used to deterministically obtain such a vector. In Chapter 3, we discuss a result in the broad area of lattices and integer optimization, in joint work with Rebecca Hoberg, Thomas Rothvoss and Xin Yang. The number balancing (NBP) problem is the following: given real numbers a1,...,an in [0,1], find two disjoint subsets I1, I2 of [n] so that the difference |∑i∈I1 ai − ∑i∈I2 ai| of their sums is minimized. An application of the pigeonhole principle shows that there is always a solution where the difference is at most O √n/2

  8. Automation of a high risk medication regime algorithm in a home health care population.

    PubMed

    Olson, Catherine H; Dierich, Mary; Westra, Bonnie L

    2014-10-01

    Create an automated algorithm for predicting elderly patients' medication-related risks for readmission and validate it by comparing results with a manual analysis of the same patient population. Outcome and Assessment Information Set (OASIS) and medication data were reused from a previous, manual study of 911 patients from 15 Medicare-certified home health care agencies. The medication data were converted into standardized drug codes using APIs managed by the National Library of Medicine (NLM), and then integrated into an automated algorithm that calculates patients' high risk medication regime scores (HRMRs). A comparison of the results from the algorithm and the manual process was conducted to determine how frequently the algorithm derived HRMR scores that are predictive of readmission. HRMR scores are composed of polypharmacy (number of drugs), Potentially Inappropriate Medications (PIM) (drugs risky to the elderly), and the Medication Regimen Complexity Index (MRCI) (complex dose forms, instructions or administration). The algorithm produced polypharmacy, PIM, and MRCI scores that matched 99%, 87% and 99% of the scores, respectively, from the manual analysis. Imperfect match rates resulted from discrepancies in how drugs were classified and coded by the manual analysis vs. the automated algorithm. HRMR rules lack clarity, resulting in clinical judgments for manual coding that were difficult to replicate in the automated analysis. The high comparison rates for the three measures suggest that an automated clinical tool could use patients' medication records to predict their risks of avoidable readmissions. Copyright © 2014 Elsevier Inc. All rights reserved.
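
    Assembling the three HRMR components from a coded medication list can be sketched as below. The drug codes, the PIM reference set and the per-drug MRCI contributions are hypothetical placeholders; the abstract does not specify how the components are weighted or thresholded.

      # Sketch of computing the three HRMR components from standardized drug codes.
      # Codes, PIM set and MRCI contributions are hypothetical placeholders.
      def hrmr_components(medications, pim_codes, mrci_scores):
          """medications: list of standardized drug codes;
          pim_codes: set of codes considered potentially inappropriate for the elderly;
          mrci_scores: dict mapping each code to its regimen-complexity contribution."""
          unique_meds = set(medications)
          polypharmacy = len(unique_meds)
          pim_count = sum(1 for code in unique_meds if code in pim_codes)
          mrci_total = sum(mrci_scores.get(code, 0) for code in unique_meds)
          return {"polypharmacy": polypharmacy, "pim": pim_count, "mrci": mrci_total}

      # Hypothetical example
      meds = ["197361", "310965", "197361", "856917"]
      print(hrmr_components(meds, pim_codes={"856917"},
                            mrci_scores={"197361": 2, "310965": 1, "856917": 3}))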

  9. Comparison of human observer and algorithmic target detection in nonurban forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.

    2005-07-01

    We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.

  10. Magnetic Resonance Poroelastography: An Algorithm for Estimating the Mechanical Properties of Fluid-Saturated Soft Tissues

    PubMed Central

    Perriñez, Phillip R.; Kennedy, Francis E.; Van Houten, Elijah E. W.; Weaver, John B.; Paulsen, Keith D.

    2010-01-01

    Magnetic Resonance Poroelastography (MRPE) is introduced as an alternative to single-phase model-based elastographic reconstruction methods. A three-dimensional (3D) finite element poroelastic inversion algorithm was developed to recover the mechanical properties of fluid-saturated tissues. The performance of this algorithm was assessed through a variety of numerical experiments, using synthetic data to probe its stability and sensitivity to the relevant model parameters. Preliminary results suggest the algorithm is robust in the presence of noise and capable of producing accurate assessments of the underlying mechanical properties in simulated phantoms. Further, a 3D time-harmonic motion field was recorded for a poroelastic phantom containing a single cylindrical inclusion and used to assess the feasibility of MRPE image reconstruction from experimental data. The elastograms obtained from the proposed poroelastic algorithm demonstrate significant improvement over linearly elastic MRE images generated using the same data. In addition, MRPE offers the opportunity to estimate the time-harmonic pressure field resulting from tissue excitation, highlighting the potential for its application in the diagnosis and monitoring of disease processes associated with changes in interstitial pressure. PMID:20199912

  11. An algorithm for 4D CT image sorting using spatial continuity.

    PubMed

    Li, Chen; Liu, Jie

    2013-01-01

    4D CT, which can locate the position of the tumor throughout the entire respiratory cycle and effectively reduce image artifacts, has been widely used in radiation therapy of tumors. Current 4D CT methods require external surrogates of respiratory motion obtained from extra instruments. However, respiratory signals recorded by these external markers may not always accurately represent the internal tumor and organ movements, especially when irregular breathing patterns occur. In this paper we propose a novel automatic 4D CT sorting algorithm that performs without these external surrogates. The sorting algorithm requires collecting the image data with a cine scan protocol. Beginning with the first couch position, images from the adjacent couch position are selected according to spatial continuity. The process is continued until images from all couch positions are sorted and the entire 3D volume is produced. The algorithm is verified with respiratory phantom image data and clinical image data. The primary test results show that the 4D CT images created by our algorithm eliminate the motion artifacts effectively and clearly demonstrate the movement of tumor and organs over the breathing period.
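
    A minimal sketch of the greedy selection idea, assuming a simple sum-of-squared-differences measure of spatial continuity between images at adjacent couch positions (the paper's actual continuity metric may differ):

```python
import numpy as np

def sort_4dct_by_spatial_continuity(couch_positions):
    """couch_positions: list over couch positions; each element is a list of
    2D numpy arrays (candidate cine images acquired at that position).
    Greedily picks one image per couch position so that consecutive picks
    are as similar as possible (sum of squared differences)."""
    selected = [couch_positions[0][0]]                 # seed with the first image
    for candidates in couch_positions[1:]:
        prev = selected[-1].astype(float)
        ssd = [float(np.sum((img.astype(float) - prev) ** 2)) for img in candidates]
        selected.append(candidates[int(np.argmin(ssd))])
    return selected
```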

  12. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
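
    For reference, a generic FISTA iteration for an l1-regularized least-squares problem is sketched below; it uses a plain gradient step rather than the paper's OS-SART subproblem or GPU implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (generic sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```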

  13. An algorithm for optimal fusion of atlases with different labeling protocols

    PubMed Central

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Aganj, Iman; Bhatt, Priyanka; Casillas, Christen; Salat, David; Boxer, Adam; Fischl, Bruce; Van Leemput, Koen

    2014-01-01

    In this paper we present a novel label fusion algorithm suited for scenarios in which different manual delineation protocols with potentially disparate structures have been used to annotate the training scans (hereafter referred to as “atlases”). Such scenarios arise when atlases have missing structures, when they have been labeled with different levels of detail, or when they have been taken from different heterogeneous databases. The proposed algorithm can be used to automatically label a novel scan with any of the protocols from the training data. Further, it enables us to generate new labels that are not present in any delineation protocol by defining intersections on the underlying labels. We first use probabilistic models of label fusion to generalize three popular label fusion techniques to the multi-protocol setting: majority voting, semi-locally weighted voting and STAPLE. Then, we identify some shortcomings of the generalized methods, namely the inability to produce meaningful posterior probabilities for the different labels (majority voting, semi-locally weighted voting) and to exploit the similarities between the atlases (all three methods). Finally, we propose a novel generative label fusion model that can overcome these drawbacks. We use the proposed method to combine four brain MRI datasets labeled with different protocols (with a total of 102 unique labeled structures) to produce segmentations of 148 brain regions. Using cross-validation, we show that the proposed algorithm outperforms the generalizations of majority voting, semi-locally weighted voting and STAPLE (mean Dice score 83%, vs. 77%, 80% and 79%, respectively). We also evaluated the proposed algorithm in an aging study, successfully reproducing some well-known results in cortical and subcortical structures. PMID:25463466
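
    As a point of reference for the simplest baseline, voxel-wise majority voting over registered atlas label maps (single-protocol case) can be sketched as follows; the paper's multi-protocol generalizations and generative model are not shown.

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """label_maps: list of integer label volumes of identical shape, one per
    registered atlas.  Returns the voxel-wise most frequent label."""
    stack = np.stack(label_maps, axis=0)               # (n_atlases, ...)
    n_labels = int(stack.max()) + 1
    counts = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)], axis=0)
    return np.argmax(counts, axis=0)                   # ties go to the lower label
```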

  14. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.

  15. Rapid execution of fan beam image reconstruction algorithms using efficient computational techniques and special-purpose processors

    NASA Astrophysics Data System (ADS)

    Gilbert, B. K.; Robb, R. A.; Chu, A.; Kenue, S. K.; Lent, A. H.; Swartzlander, E. E., Jr.

    1981-02-01

    Rapid advances during the past ten years of several forms of computer-assisted tomography (CT) have resulted in the development of numerous algorithms to convert raw projection data into cross-sectional images. These reconstruction algorithms are either 'iterative,' in which a large matrix algebraic equation is solved by successive approximation techniques; or 'closed form'. Continuing evolution of the closed form algorithms has allowed the newest versions to produce excellent reconstructed images in most applications. This paper will review several computer software and special-purpose digital hardware implementations of closed form algorithms, either proposed during the past several years by a number of workers or actually implemented in commercial or research CT scanners. The discussion will also cover a number of recently investigated algorithmic modifications which reduce the amount of computation required to execute the reconstruction process, as well as several new special-purpose digital hardware implementations under development in laboratories at the Mayo Clinic.

  16. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B 737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.

  17. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    PubMed

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.

  18. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should classify the mobile user firstly. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduced genetic algorithm to optimize the results of the decision tree algorithm. We also take the context information as a classification attributes for the mobile user and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. At last, we make an experiment on the mobile user with the algorithm, we can classify the mobile user into Basic service user, E-service user, Plus service user, and Total service user classes and we can also get some rules about the mobile user. Compared to C4.5 decision tree algorithm and SVM algorithm, the algorithm we proposed in this paper has higher accuracy and more simplicity. PMID:24688389

  19. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  20. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    PubMed

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
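
    A minimal sketch of the average-consensus building block the paper relies on, written for an undirected topology (the paper treats strongly connected directed WSNs):

```python
import numpy as np

def average_consensus(values, adjacency, eps=0.1, n_iter=200):
    """Plain average-consensus iteration: each node nudges its local value
    toward its neighbours' values,
        x_i <- x_i + eps * sum_j a_ij * (x_j - x_i),
    so every node converges to the network-wide average (eps should be
    smaller than 1 / max node degree for this undirected version)."""
    x = np.asarray(values, dtype=float).copy()
    A = np.asarray(adjacency, dtype=float)
    degree = A.sum(axis=1)
    for _ in range(n_iter):
        x = x + eps * (A @ x - degree * x)
    return x

# e.g. a 4-node path graph: all nodes end up near the mean of [1, 2, 3, 10]
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(average_consensus([1.0, 2.0, 3.0, 10.0], A))
```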

  1. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
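
    For context, the baseline the new method is compared against, the direct stochastic simulation (Gillespie) algorithm, is sketched here for a plain birth/death process:

```python
import random

def gillespie_birth_death(n0, birth_rate, death_rate, t_end, seed=0):
    """Direct stochastic simulation (Gillespie) algorithm for a simple
    birth/death process: draw an exponential waiting time from the total
    propensity, then pick the event proportionally to its propensity."""
    random.seed(seed)
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while t < t_end and n > 0:
        a_birth, a_death = birth_rate * n, death_rate * n
        a_total = a_birth + a_death
        t += random.expovariate(a_total)               # time to next event
        n += 1 if random.random() * a_total < a_birth else -1
        trajectory.append((t, n))
    return trajectory

# e.g. gillespie_birth_death(n0=50, birth_rate=1.0, death_rate=1.1, t_end=10.0)
```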

  2. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    NASA Astrophysics Data System (ADS)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The multimodal optimization problem, i.e., an optimization problem with many local optima, is the problem of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it suffers from premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a candidate point. In the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
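
    A minimal sketch of the two-stage idea, with a crude random global search standing in for the ABC stage (the ABC update rules are not shown) and SciPy's BFGS performing the local refinement; the Rastrigin test function and the sampling bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """A standard multimodal test function with many local optima."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

def global_then_bfgs(f, dim, bounds=(-5.12, 5.12), n_samples=2000, seed=0):
    """Stage 1: crude random global search (stand-in for the ABC stage).
    Stage 2: BFGS refinement started from the best sampled point."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(bounds[0], bounds[1], size=(n_samples, dim))
    best = samples[int(np.argmin([f(s) for s in samples]))]
    result = minimize(f, best, method="BFGS")
    return result.x, result.fun

print(global_then_bfgs(rastrigin, dim=2))
```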

  3. Improved Global Ocean Color Using Polymer Algorithm

    NASA Astrophysics Data System (ADS)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications such as fishing or the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  4. Economizing Education: Assessment Algorithms and Calculative Agencies

    ERIC Educational Resources Information Center

    O'Keeffe, Cormac

    2017-01-01

    International Large Scale Assessments have been producing data about educational attainment for over 60 years. More recently however, these assessments as tests have become digitally and computationally complex and increasingly rely on the calculative work performed by algorithms. In this article I first consider the coordination of relations…

  5. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    NASA Astrophysics Data System (ADS)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on a support vector machine (SVM) and gradient evolution (GE) algorithms. SVM algorithm has been widely used in classification. However, its result is significantly influenced by the parameters. Therefore, this paper aims to propose an improvement of SVM algorithm which can find the best SVMs’ parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVMs’ parameters. The GE algorithm takes a role as a global optimizer in finding the best parameter which will be used by SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than other algorithms tested in this paper.

  6. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced, using the generic control Hamiltonians H (r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally-symmetric state of an n-qubit system the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H (r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  7. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing are performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real time delay of less than one second between illumination and display.
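
    A minimal sketch of the M-association-out-of-N-scan rule described above, written as a generic sliding-window detector (not TSC's implementation):

```python
from collections import deque

def m_of_n_detections(scan_hits, m=3, n=5):
    """Scan-to-scan integration: declare a target whenever at least m of the
    last n per-scan association outcomes (True/False) are True."""
    window = deque(maxlen=n)
    declared = []
    for hit in scan_hits:
        window.append(bool(hit))
        declared.append(sum(window) >= m)
    return declared

# e.g. a weak target seen intermittently over 8 scans
print(m_of_n_detections([1, 0, 1, 1, 0, 1, 1, 1], m=3, n=5))
```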

  8. Parallel Algorithms for Computational Models of Geophysical Systems

    NASA Astrophysics Data System (ADS)

    Carrillo Ledesma, A.; Herrera, I.; de la Cruz, L. M.; Hernández, G.; Grupo de Modelacion Matematica y Computacional

    2013-05-01

    Mathematical models of many systems of interest, including very important continuous systems of Earth Sciences and Engineering, lead to a great variety of partial differential equations (PDEs) whose solution methods are based on the computational processing of large-scale algebraic systems. Furthermore, the incredible expansion experienced by the existing computational hardware and software has made amenable to effective treatment problems of an ever increasing diversity and complexity, posed by scientific and engineering applications. Parallel computing is outstanding among the new computational tools and, in order to effectively use the most advanced computers available today, massively parallel software is required. Domain decomposition methods (DDMs) have been developed precisely for effectively treating PDEs in parallel. Ideally, the main objective of domain decomposition research is to produce algorithms capable of 'obtaining the global solution by exclusively solving local problems', but up to now this has only been an aspiration; that is, a strong desire for achieving such a property and so we call it 'the DDM-paradigm'. In recent times, numerically competitive DDM-algorithms are non-overlapping, preconditioned and necessarily incorporate constraints which pose an additional challenge for achieving the DDM-paradigm. Recently a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm, was developed. To derive them a new discretization method, which uses a non-overlapping system of nodes (the derived-nodes), was introduced. This discretization procedure can be applied to any boundary-value problem, or system of such equations. In turn, the resulting system of discrete equations can be treated using any available DDM-algorithm. In particular, two of the four DVS-algorithms mentioned above were obtained by application of the well-known and very effective algorithms BDDC and FETI-DP; these will be referred to as the DVS

  9. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    PubMed

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2017-04-01

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters, and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transaction on Biomedical Engineering, 60(9):2484-2493,2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.

  10. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
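
    The stochastic aspect described above rests on the simulated-annealing acceptance rule; a generic sketch (not the flight software itself) follows.

```python
import math
import random

def simulated_annealing(energy, perturb, x0, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    """Generic simulated-annealing loop: always accept downhill moves and
    accept uphill moves with probability exp(-dE/T), so the search can
    climb out of local minima of the energy-like error measure."""
    random.seed(seed)
    x, e, t = x0, energy(x0), t0
    best_x, best_e = x, e
    for _ in range(n_iter):
        x_new = perturb(x)
        e_new = energy(x_new)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                      # geometric cooling schedule
    return best_x, best_e
```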

  11. A recurrence-weighted prediction algorithm for musical analysis

    NASA Astrophysics Data System (ADS)

    Colucci, Renato; Leguizamon Cucunuba, Juan Sebastián; Lloyd, Simon

    2018-03-01

    Forecasting the future behaviour of a system using past data is an important topic. In this article we apply nonlinear time series analysis in the context of music, and present new algorithms for extending a sample of music, while maintaining characteristics similar to the original piece. By using ideas from ergodic theory, we adapt the classical prediction method of Lorenz analogues so as to take into account recurrence times, and demonstrate with examples, how the new algorithm can produce predictions with a high degree of similarity to the original sample.

  12. WE-AB-209-06: Dynamic Collimator Trajectory Algorithm for Use in VMAT Treatment Deliveries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, L; Thomas, C; Syme, A

    2016-06-15

    Purpose: To develop advanced dynamic collimator positioning algorithms for optimal beam’s-eye-view (BEV) fitting of targets in VMAT procedures, including multiple metastases stereotactic radiosurgery procedures. Methods: A trajectory algorithm was developed, which can dynamically modify the angle of the collimator as a function of VMAT control point to provide optimized collimation of target volume(s). Central to this algorithm is a concept denoted “whitespace”, defined as area within the jaw-defined BEV field, outside of the PTV, and not shielded by the MLC when fit to the PTV. Calculating whitespace at all collimator angles and every control point, a two-dimensional topographical map depicting the tightness-of-fit of the MLC was generated. A variety of novel searching algorithms identified a number of candidate trajectories of continuous collimator motion. Ranking these candidate trajectories according to their accrued whitespace value produced an optimal solution for navigation of this map. Results: All trajectories were normalized to minimum possible (i.e. calculated without consideration of collimator motion constraints) accrued whitespace. On an acoustic neuroma case, a random walk algorithm generated a trajectory with 151% whitespace; random walk including a mandatory anchor point improved this to 148%; gradient search produced a trajectory with 137%; and bi-directional gradient search generated a trajectory with 130% whitespace. For comparison, a fixed collimator angle of 30° and 330° accumulated 272% and 228% of whitespace, respectively. The algorithm was tested on a clinical case with two metastases (single isocentre) and identified collimator angles that allow for simultaneous irradiation of the PTVs while minimizing normal tissue irradiation. Conclusion: Dynamic collimator trajectories have the potential to improve VMAT deliveries through increased efficiency and reduced normal tissue dose, especially in treatment of multiple cranial

  13. One-year results of an algorithmic approach to managing failed back surgery syndrome

    PubMed Central

    Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia

    2014-01-01

    BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573

  14. Close coupling of pre- and post-processing vision stations using inexact algorithms

    NASA Astrophysics Data System (ADS)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod together with the Piecewise Error Compensation Algorithm incorporating the inexact algorithms, i.e., fuzzy logic, neural networks and neural fuzzy technique, are developed. A spring mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system can produce excellent results, much better than a human operator.

  15. A fast and accurate online sequential learning algorithm for feedforward networks.

    PubMed

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
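
    For orientation, the batch ELM idea that OS-ELM extends can be sketched as follows, using sigmoid additive nodes; the recursive chunk-by-chunk update of the output weights that defines OS-ELM is not shown.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Batch ELM: random input weights and biases, sigmoid hidden layer,
    output weights solved analytically by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```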

  16. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    PubMed

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With unprecedented high channel density (450 channels/cm³) image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to exploit correctly the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.

  17. Improved algorithms for estimating Total Alkalinity in Northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Devkota, M.; Dash, P.

    2017-12-01

    Ocean Acidification (OA) is one of the serious challenges that have significant impacts on the ocean. About 25% of anthropogenically generated CO2 is absorbed by the oceans, which decreases average ocean pH. This change has critical impacts on marine species, ocean ecology, and associated economics. 35 years of observation have shown that the rate of alteration in OA parameters varies geographically, with higher variations in the northern Gulf of Mexico (N-GoM). Several studies have suggested that the Mississippi River affects the carbon dynamics of the N-GoM coastal ecosystem significantly. Total Alkalinity (TA) algorithms developed for major ocean basins produce inaccurate estimations in this region. Hence, a local algorithm to estimate TA is needed for this region, one which would incorporate the local effects of oceanographic processes and complex spatial influences. In situ data collected in the N-GoM region during the GOMECC-I and II cruises, and GISR Cruises (G-1, 3, 5) from 2007 to 2013 were assimilated and used to calculate the efficiency of the existing TA algorithm that uses Sea Surface Temperature (SST) and Sea Surface Salinity (SSS) as explanatory variables. To improve this algorithm, firstly, statistical analyses were performed to improve the coefficients and the functional form of this algorithm. Then, chlorophyll a (Chl-a) was included as an additional explanatory variable in the multiple linear regression approach in addition to SST and SSS. Based on the average concentration of Chl-a for the last 15 years, the N-GoM was divided into two regions, and two separate algorithms were developed for each region. Finally, to address spatial non-stationarity, a Geographically Weighted Regression (GWR) algorithm was developed. The existing TA algorithm resulted in considerable algorithm bias, with a larger bias in the coastal waters. Chl-a as an additional explanatory variable reduced the bias in the residuals and improved the algorithm efficiency. Chl-a worked as a proxy for
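
    A minimal sketch of the regression step described above, assuming an ordinary-least-squares fit with SST, SSS and log10(Chl-a) as predictors; the study's exact functional form, regional split and GWR extension are not shown.

```python
import numpy as np

def fit_ta_regression(sst, sss, chla, ta):
    """Empirical TA algorithm sketch: ordinary least squares on
    [1, SST, SSS, log10(Chl-a)].  The coefficients and functional form
    used in the study itself may differ."""
    X = np.column_stack([np.ones_like(sst), sst, sss, np.log10(chla)])
    coef, *_ = np.linalg.lstsq(X, ta, rcond=None)
    return coef                       # [intercept, b_SST, b_SSS, b_logChl]

def predict_ta(coef, sst, sss, chla):
    X = np.column_stack([np.ones_like(sst), sst, sss, np.log10(chla)])
    return X @ coef
```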

  18. Evaluation and reformulation of the maximum peak height algorithm (MPH) and application in a hypertrophic lagoon

    NASA Astrophysics Data System (ADS)

    Pitarch, Jaime; Ruiz-Verdú, Antonio; Sendra, María. D.; Santoleri, Rosalia

    2017-02-01

    We studied the performance of the MERIS maximum peak height (MPH) algorithm in the retrieval of chlorophyll-a concentration (CHL), using a matchup data set of Bottom-of-Rayleigh Reflectances (BRR) and CHL from a hypertrophic lake (Albufera de Valencia). The MPH algorithm produced a slight underestimation of CHL in the pixels classified as cyanobacteria (83% of the total) and a strong overestimation in those classified as eukaryotic phytoplankton (17%). In situ biomass data showed that the binary classification of MPH was not appropriate for mixed phytoplankton populations, producing also unrealistic discontinuities in the CHL maps. We recalibrated MPH using our matchup data set and found that a single calibration curve of third degree fitted equally well to all matchups regardless of how they were classified. As a modification to the former approach, we incorporated the Phycocyanin Index (PCI) in the formula, thus taking into account the gradient of phytoplankton composition, which reduced the CHL retrieval errors. By using in situ biomass data, we also proved that PCI was indeed an indicator of cyanobacterial dominance. We applied our recalibration of the MPH algorithm to the whole MERIS data set (2002-2012). Results highlight the usefulness of the MPH algorithm as a tool to monitor eutrophication. The relevance of this fact is higher since MPH does not require a complete atmospheric correction, which often fails over such waters. An adequate flagging or correction of sun glint is advisable though, since the MPH algorithm was sensitive to sun glint.

  19. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
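
    As a point of reference, the classical fixed-point iteration for Poisson maximum-likelihood deconvolution (Richardson-Lucy) is sketched below for multiple frames with known PSFs; the paper's algorithm adds frame selection, regularization and PSF estimation that are not shown.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiframe_richardson_lucy(frames, psfs, n_iter=30):
    """Multi-frame Richardson-Lucy update, the classical fixed-point
    iteration for Poisson maximum-likelihood deconvolution.
    frames: list of 2D numpy arrays (observed images)
    psfs:   list of 2D numpy arrays (known PSFs, one per frame)"""
    x = np.full_like(frames[0], frames[0].mean(), dtype=float)
    for _ in range(n_iter):
        ratio = np.zeros_like(x)
        for y, h in zip(frames, psfs):
            blur = fftconvolve(x, h, mode="same") + 1e-12     # avoid divide-by-zero
            ratio += fftconvolve(y / blur, h[::-1, ::-1], mode="same")
        x *= ratio / len(frames)                              # multiplicative update
    return x
```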

  1. Basic properties of lattices of cubes, algorithms for their construction, and application capabilities in discrete optimization

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2015-01-01

    The basic properties of a new type of lattice—the lattice of cubes—are described. It is shown that, with a suitable choice of union and intersection operations, the set of all subcubes of an N-cube forms a lattice, which is called a lattice of cubes. Algorithms for constructing such lattices are described, and the results produced by these algorithms in the case of lattices of various dimensions are illustrated. It is proved that a lattice of cubes is a lattice with supplements, which makes it possible to minimize and maximize supermodular functions on it. Examples of such functions are given. The possibility of applying previously developed efficient optimization algorithms to the formulation and solution of new classes of problems on lattices of cubes is also discussed.

  2. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
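
    A hedged sketch of Laplacian-eigenmap-style feature extraction for spike waveforms is given below; the GLF method in the paper additionally maximizes a weighted variance term, which is omitted here.

```python
import numpy as np

def graph_laplacian_features(X, n_features=2, sigma=1.0):
    """X: (n_waveforms, n_samples) array of spike waveforms.
    Builds a Gaussian affinity graph over the waveforms, forms the graph
    Laplacian L = D - W, and returns the eigenvectors associated with the
    smallest nonzero eigenvalues as low-dimensional features."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)                         # ascending eigenvalues
    return eigvecs[:, 1:1 + n_features]                          # skip the constant mode
```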

  3. I/O efficient algorithms and applications in geographic information systems

    NASA Astrophysics Data System (ADS)

    Danner, Andrew

    Modern remote sensing methods such a laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.

  4. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared; unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution

  5. Complexity transitions in global algorithms for sparse linear systems over finite fields

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.

    2002-09-01

    We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations in a finite Galois field modulo q. Using tools from statistical mechanics we are able to identify phase transitions in the structure of the solution space and to connect them to the changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in memory and CPU requirements necessary for the algorithms. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
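
    For concreteness, the global algorithm studied above, Gaussian elimination over a finite field, is sketched below for a prime modulus p (dense version; the paper works with large sparse random systems).

```python
def gauss_mod_p(A, b, p):
    """Gaussian elimination over GF(p), p prime, for A x = b (mod p).
    Returns one solution as a list, or None if the system is inconsistent."""
    rows = [list(row) + [bi] for row, bi in zip(A, b)]   # augmented matrix
    n, m = len(rows), len(rows[0]) - 1
    pivot_cols, r = [], 0
    for c in range(m):
        piv = next((i for i in range(r, n) if rows[i][c] % p != 0), None)
        if piv is None:
            continue                                     # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c] % p, p - 2, p)              # modular inverse via Fermat
        rows[r] = [(v * inv) % p for v in rows[r]]
        for i in range(n):
            if i != r and rows[i][c] % p != 0:
                f = rows[i][c]
                rows[i] = [(vi - f * vr) % p for vi, vr in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    for row in rows[r:]:                                 # leftover rows must read 0 = 0
        if row[-1] % p != 0:
            return None
    x = [0] * m                                          # free variables set to 0
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][-1] % p
    return x

# e.g. over GF(7):  x + 2y = 3,  3x + y = 5  ->  [0, 5]
print(gauss_mod_p([[1, 2], [3, 1]], [3, 5], 7))
```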

  6. Optimization of laminated stacking sequence for buckling load maximization by genetic algorithm

    NASA Technical Reports Server (NTRS)

    Le Riche, Rodolphe; Haftka, Raphael T.

    1992-01-01

    The use of a genetic algorithm to optimize the stacking sequence of a composite laminate for buckling load maximization is studied. Various genetic parameters including the population size, the probability of mutation, and the probability of crossover are optimized by numerical experiments. A new genetic operator - permutation - is proposed and shown to be effective in reducing the cost of the genetic search. Results are obtained for a graphite-epoxy plate, first when only the buckling load is considered, and then when constraints on ply contiguity and strain failure are added. The influence on the genetic search of the penalty parameter enforcing the contiguity constraint is studied. The advantage of the genetic algorithm in producing several near-optimal designs is discussed.
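
    A minimal sketch of a permutation-style operator on a stacking sequence, as one plausible reading of the operator described above (not necessarily the paper's exact implementation):

```python
import random

def permutation_operator(stacking, rng=random):
    """Swap two randomly chosen ply positions in a stacking sequence.
    Reordering plies changes the bending (and hence buckling) behaviour
    while leaving the ply counts, and thus in-plane properties, unchanged."""
    child = list(stacking)
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# e.g. permutation_operator([45, -45, 0, 90, 0, -45, 45]) -> a reordered laminate
```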

  7. A comparative intelligibility study of single-microphone noise reduction algorithms.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2007-09-01

    The evaluation of intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise including babble, car, street and train at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, sub-space, statistical model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms that were found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.

  8. Prediction of Aerodynamic Coefficients for Wind Tunnel Data using a Genetic Algorithm Optimized Neural Network

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy

    2002-01-01

    A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g., lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.
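
    The abstract notes that most aerodynamic parameters can be well fitted with polynomial functions; the hedged sketch below shows only that baseline idea, a least-squares quadratic surface in angle of attack and Mach number fitted to synthetic placeholder data, and does not reproduce the paper's GA-optimized neural network.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    alpha = rng.uniform(-5, 15, 200)           # angle of attack in degrees (synthetic)
    mach = rng.uniform(0.2, 0.9, 200)          # Mach number (synthetic)
    cl = 0.1 * alpha + 0.05 * mach * alpha - 0.002 * alpha**2 + rng.normal(0, 0.01, 200)

    # Design matrix for a quadratic polynomial in (alpha, Mach)
    X = np.column_stack([np.ones_like(alpha), alpha, mach,
                         alpha * mach, alpha**2, mach**2])
    coeffs, *_ = np.linalg.lstsq(X, cl, rcond=None)

    def predict_cl(a, m):
        return np.dot([1.0, a, m, a * m, a**2, m**2], coeffs)

    print(predict_cl(5.0, 0.6))                # lift-like coefficient at alpha=5 deg, Mach 0.6
    ```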

  9. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position-permutation image encryption algorithms, commonly used for image encryption and investigated in this work, are unfortunately frail under known-plaintext attack. In view of this weakness, we put forward an effective decryption algorithm for all pure position-permutation algorithms. First, a summary of pure position-permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, using probability theory and algebraic principles, the decryption probability of pure position-permutation algorithms is verified theoretically. Next, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown.
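
    To make the weakness concrete, the toy sketch below (not one of the specific ciphers analysed in the paper) implements a pure position-permutation cipher whose key is a seed-derived permutation of pixel positions; every known plaintext/ciphertext pair directly exposes where pixel values were moved, which is why such schemes fall to known-plaintext attacks.

    ```python
    import numpy as np

    def keyed_permutation(n, seed):
        return np.random.default_rng(seed).permutation(n)

    def encrypt(img, seed):
        flat = img.ravel()
        return flat[keyed_permutation(flat.size, seed)].reshape(img.shape)

    def decrypt(cipher, seed):
        perm = keyed_permutation(cipher.size, seed)
        flat = np.empty_like(cipher.ravel())
        flat[perm] = cipher.ravel()                 # undo the position shuffle
        return flat.reshape(cipher.shape)

    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    assert np.array_equal(decrypt(encrypt(img, seed=42), seed=42), img)
    ```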

  10. Does provider adherence to a treatment guideline change clinical outcomes for patients with bipolar disorder? Results from the Texas Medication Algorithm Project.

    PubMed

    Dennehy, Ellen B; Suppes, Trisha; Rush, A John; Miller, Alexander L; Trivedi, Madhukar H; Crismon, M Lynn; Carmody, Thomas J; Kashner, T Michael

    2005-12-01

    Despite increasing adoption of clinical practice guidelines in psychiatry, there is little measurement of provider implementation of these recommendations and the resulting impact on clinical outcomes. The current study describes one effort to measure these relationships in a cohort of public-sector outpatients with bipolar disorder. Participants were enrolled in the algorithm intervention of the Texas Medication Algorithm Project (TMAP). Study methods and the adherence scoring algorithm have been described elsewhere. The current paper addresses the relationships between patient characteristics, provider experience with the algorithm, provider adherence, and clinical outcomes. Measurement of provider adherence includes evaluation of visit frequency, medication choice and dosing, and response to patient symptoms. An exploratory composite 'adherence by visit' score was developed for these analyses. A total of 1948 visits from 141 subjects were evaluated using a two-stage declining-effects model. Providers with more experience using the algorithm tended to adhere less to treatment recommendations. Few patient factors significantly impacted provider adherence. Increased adherence to algorithm recommendations was associated with larger decreases in overall psychiatric symptoms and depressive symptoms over time, but did not impact either immediate or long-term reductions in manic symptoms. Greater provider adherence to treatment guideline recommendations was associated with greater reductions in depressive symptoms and overall psychiatric symptoms over time. Additional research is needed to refine measurement and to further clarify these relationships.

  11. Power of automated algorithms for combining time-line follow-back and urine drug screening test results in stimulant-abuse clinical trials.

    PubMed

    Oden, Neal L; VanVeldhuisen, Paul C; Wakim, Paul G; Trivedi, Madhukar H; Somoza, Eugene; Lewis, Daniel

    2011-09-01

    In clinical trials of treatment for stimulant abuse, researchers commonly record both Time-Line Follow-Back (TLFB) self-reports and urine drug screen (UDS) results. The aim was to compare the power of self-report, qualitative (use vs. no use) UDS assessment, and various algorithms for generating self-report-UDS composite measures to detect treatment differences via t-test in simulated clinical trial data. We performed Monte Carlo simulations patterned in part on real data to model self-report reliability, UDS errors, dropout, informatively missing UDS reports, incomplete adherence to a urine donation schedule, temporal correlation of drug use, number of days in the study period, number of patients per arm, and distribution of drug-use probabilities. Investigated algorithms include maximum likelihood and Bayesian estimates, self-report alone, UDS alone, and several simple modifications of self-report (referred to here as ELCON algorithms) which eliminate perceived contradictions between it and UDS. Among the algorithms investigated, the simple ELCON algorithms gave rise to the most powerful t-tests for detecting mean group differences in stimulant drug use. Further investigation is needed to determine whether simple, naïve procedures such as the ELCON algorithms are optimal for comparing clinical study treatment arms. But researchers who currently require an automated algorithm in scenarios similar to those simulated here for combining TLFB and UDS to test group differences in stimulant use should consider one of the ELCON algorithms. This analysis continues a line of inquiry which could determine how best to measure outpatient stimulant use in clinical trials (NIDA. NIDA Monograph-57: Self-Report Methods of Estimating Drug Abuse: Meeting Current Challenges to Validity. NTIS PB 88248083. Bethesda, MD: National Institutes of Health, 1985; NIDA. NIDA Research Monograph 73: Urine Testing for Drugs of Abuse. NTIS PB 89151971. Bethesda, MD: National Institutes of Health, 1987; NIDA. NIDA Research
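
    The exact ELCON rules are not spelled out in the abstract, so the sketch below is only a hedged illustration of the general idea: self-reported non-use is overridden wherever the urine screen is positive, the per-subject use rate becomes the composite outcome, and a t-test compares two simulated arms. All arrays, rates, and the override rule are assumptions made for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def composite_use_rate(self_report, uds_positive):
        """self_report, uds_positive: 0/1 arrays shaped (days, subjects)."""
        combined = np.maximum(self_report, uds_positive)  # resolve contradictions toward 'use'
        return combined.mean(axis=0)                      # per-subject use rate

    arm_a = composite_use_rate(rng.binomial(1, 0.40, (56, 50)), rng.binomial(1, 0.35, (56, 50)))
    arm_b = composite_use_rate(rng.binomial(1, 0.30, (56, 50)), rng.binomial(1, 0.25, (56, 50)))
    t, p = stats.ttest_ind(arm_a, arm_b)
    print(f"t = {t:.2f}, p = {p:.3f}")
    ```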

  12. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    NASA Astrophysics Data System (ADS)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary source of power consumption, several researchers have proposed compression algorithms to overcome this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements. It also offers better performance in terms of energy saving, memory usage, and data transmission efficiency, and is suitable for implementation in wireless sensor networks (WSNs). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  13. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742

  14. Response to "Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra".

    PubMed

    Griss, Johannes; Perez-Riverol, Yasset; The, Matthew; Käll, Lukas; Vizcaíno, Juan Antonio

    2018-05-04

    In the recent benchmarking article entitled "Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra", Rieder et al. compared several different approaches to cluster MS/MS spectra. While we certainly recognize the value of the manuscript, here we report some shortcomings detected in the original analyses. For most analyses, the authors clustered only single MS/MS runs. In one of the reported analyses, three MS/MS runs were processed together, which already led to computational performance issues in many of the tested approaches. This fact highlights the difficulties of using many of the tested algorithms on the average proteomics data sets produced nowadays. Second, the authors only processed identified spectra when merging MS runs. As a result, all unidentified spectra, which are of lower quality, had already been removed from the data set and could not influence the clustering results. Next, we found that the authors did not analyze the effect of chimeric spectra on the clustering results. In our analysis, we found that 3% of the spectra in the used data sets were chimeric, and this had marked effects on the behavior of the different clustering algorithms tested. Finally, the authors' choice to evaluate the MS-Cluster and spectra-cluster algorithms using a precursor tolerance of 5 Da for high-resolution Orbitrap data only was, in our opinion, not adequate to assess the performance of MS/MS clustering approaches.

  15. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    PubMed

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; e.g., in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (snomed ct). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet the required response time. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right; e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 snomed ct terms with the baseline algorithm and another set of 40 snomed ct terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
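
    A minimal sketch of the matching rule described above: a term is suggested when every word typed by the user is a prefix of some distinct word of the term, so opt ner me matches optic nerve meningioma. The greedy word assignment, the tiny vocabulary, and the absence of ranking or response-time optimization are simplifications, not the paper's implementation.

    ```python
    def multi_prefix_match(query, term):
        q_words = query.lower().split()
        t_words = term.lower().split()
        used = [False] * len(t_words)
        for qw in q_words:
            # greedily claim the first unused term word that starts with this query word
            hit = next((i for i, tw in enumerate(t_words)
                        if not used[i] and tw.startswith(qw)), None)
            if hit is None:
                return False
            used[hit] = True
        return True

    vocabulary = ["optic nerve meningioma", "optic neuritis", "nerve sheath tumour"]
    print([t for t in vocabulary if multi_prefix_match("opt ner me", t)])
    # -> ['optic nerve meningioma']
    ```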

  16. A method for data handling numerical results in parallel OpenFOAM simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anton, Alin; Muntean, Sebastian

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  17. a New Graduation Algorithm for Color Balance of Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.

    2018-05-01

    In order to expand the field of view and obtain more data and information when doing research on remote sensing images, workers often need to mosaic images together. However, the mosaicked image often shows large color differences and a visible gap line. Based on a graduation algorithm using trigonometric functions, this paper proposes a new algorithm of Two Quarter-rounds Curves (TQC). A Gaussian filter is used to address image color noise and the gap line. The experiments used Greenland data acquired in 1963 from the Declassified Intelligence Photography Project (DISP) by the ARGON KH-5 satellite, and imagery of the North Gulf, China, acquired by a Landsat satellite. The experimental results show that the proposed method improves the results in two respects: on the one hand, remote sensing images with large color differences become more balanced; on the other hand, the mosaicked image achieves a smoother transition.

  18. Validation of MERIS Ocean Color Algorithms in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Marullo, S.; D'Ortenzio, F.; Ribera D'Alcalà, M.; Ragni, M.; Santoleri, R.; Vellucci, V.; Luttazzi, C.

    2004-05-01

    evaluation of the DORMA algorithms with data not used for the algorithms' empirical coefficient retrieval and an extensive validation of the new MERIS algorithm. Up to now, two possible explanations have been proposed for the fact that the oligotrophic waters of the Mediterranean Sea are greener than would result from their phytoplankton content alone. Claustre et al. [8] suggested that the observed distortion effect on the blue-to-green ratio can be due to the presence of Saharan dust in the upper layers, which enhances absorption in the blue and backscattering in the green. DOR2002 also proposed a tentative explanation of the observed distortion effect based on the hypothesis that enhanced CaCO3 concentration, due to the relative abundance of coccolithophores in the oligotrophic waters of the Mediterranean Sea, can cause similar results. In this note, we show a first analysis of the performance of two regional Mediterranean OC algorithms based on SeaWiFS bands (the NL-DORMA and Bricaud et al. 2002 algorithms), the standard OC4v4 NASA algorithm, and the MERIS algorithm currently used to produce chlorophyll maps for case I waters. Finally, we revisit the Claustre et al. (2002) and DOR2002 hypotheses in the light of new measurements recently published by Malinverno et al. [9] and the results of the ADIOS project of the EC.

  19. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  20. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  1. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jianyuan; Qin, Hong; Liu, Jian

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv: 1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.

  2. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.

  3. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  4. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753

  5. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
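
    The responsivity and scene calibration described above follow the standard two-point blackbody scheme for FTS instruments. The sketch below is a hedged, synthetic illustration of that scheme only: the band, temperatures, and responsivity curve are made-up placeholders, and none of the GIFTS-specific phase correction, smoothing, or off-axis corrections are included.

    ```python
    import numpy as np

    H, K, C = 6.626e-34, 1.381e-23, 2.998e8

    def planck(wavenumber_cm, T):
        """Planck radiance in W/(m^2 sr cm^-1) at wavenumber (cm^-1) and temperature T (K)."""
        nu = wavenumber_cm * 100.0                        # cm^-1 -> m^-1
        B = 2 * H * C**2 * nu**3 / (np.exp(H * C * nu / (K * T)) - 1.0)
        return B * 100.0                                  # per m^-1 -> per cm^-1

    wn = np.linspace(700, 1400, 512)                      # LWIR band, cm^-1 (placeholder)
    T_abb, T_hbb = 265.0, 300.0                           # blackbody temperatures (placeholder)
    resp_true = 1e3 * np.exp(-((wn - 1000) / 400) ** 2)   # synthetic instrument responsivity
    scene_true = planck(wn, 285.0)

    C_abb = resp_true * planck(wn, T_abb)                 # synthetic uncalibrated spectra
    C_hbb = resp_true * planck(wn, T_hbb)
    C_scene = resp_true * scene_true

    responsivity = (C_hbb - C_abb) / (planck(wn, T_hbb) - planck(wn, T_abb))
    scene_cal = (C_scene - C_abb) / responsivity + planck(wn, T_abb)
    print(np.max(np.abs(scene_cal - scene_true)))         # ~0 in this noise-free sketch
    ```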

  6. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) SensorModule (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We now can estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we developed an efficient method of generating pixel performance assessments. In addition, a

  7. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  8. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  9. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested with actual extinction measurements of real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost comparable CPU time. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSD.

  10. Self-Reported HIV-Positive Status But Subsequent HIV-Negative Test Result Using Rapid Diagnostic Testing Algorithms Among Seven Sub-Saharan African Military Populations

    DTIC Science & Technology

    2017-07-07

    HIV rapid diagnostic tests (RDTs) combined in an algorithm are the current standard for HIV diagnosis in many sub-Saharan African countries, and extensive laboratory testing has confirmed HIV RDTs have excellent sensitivity and specificity. However

  11. An Algorithm of Association Rule Mining for Microbial Energy Prospection

    PubMed Central

    Shaheen, Muhammad; Shahbaz, Muhammad

    2017-01-01

    The presence of hydrocarbons beneath the earth's surface produces microbiological anomalies in soils and sediments. The detection of such microbial populations involves purely biochemical processes that are specialized, expensive, and time consuming. This paper proposes a new algorithm for context-based association rule mining on non-spatial data. The algorithm is a modified form of an already developed algorithm that was designed for spatial databases only. The algorithm is applied to mine context-based association rules from a microbial database in order to extract interesting and useful associations of microbial attributes with the existence of hydrocarbon reserves. The surface and soil manifestations caused by the presence of hydrocarbon-oxidizing microbes are selected from the existing literature and stored in a shared database. The algorithm is applied to the said database to generate direct and indirect associations among the stored microbial indicators. These associations are then correlated with the probability of hydrocarbon existence. The numerical evaluation shows better accuracy for non-spatial data, compared to conventional algorithms, at generating reliable and robust rules. PMID:28393846
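
    For readers unfamiliar with association rule mining, the sketch below shows the basic support/confidence machinery on a hypothetical table of microbial indicators per site; the paper's context-based extensions, indirect associations, and the actual indicator database are not reproduced.

    ```python
    from itertools import combinations

    transactions = [                              # hypothetical indicator sets per site
        {"methanotrophs", "high_CH4", "reservoir"},
        {"methanotrophs", "high_CH4"},
        {"sulfate_reducers", "reservoir"},
        {"methanotrophs", "high_CH4", "reservoir"},
    ]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    min_support, min_confidence = 0.5, 0.7
    items = sorted({i for t in transactions for i in t})
    for a, b in combinations(items, 2):
        for antecedent, consequent in [({a}, {b}), ({b}, {a})]:
            s = support(antecedent | consequent)
            if s >= min_support and support(antecedent) > 0:
                conf = s / support(antecedent)
                if conf >= min_confidence:
                    print(f"{antecedent} -> {consequent}  support={s:.2f} confidence={conf:.2f}")
    ```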

  12. The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2006 update.

    PubMed

    Moore, Troy A; Buchanan, Robert W; Buckley, Peter F; Chiles, John A; Conley, Robert R; Crismon, M Lynn; Essock, Susan M; Finnerty, Molly; Marder, Stephen R; Miller, Del D; McEvoy, Joseph P; Robinson, Delbert G; Schooler, Nina R; Shon, Steven P; Stroup, T Scott; Miller, Alexander L

    2007-11-01

    A panel of academic psychiatrists and pharmacists, clinicians from the Texas public mental health system, advocates, and consumers met in June 2006 in Dallas, Tex., to review recent evidence in the pharmacologic treatment of schizophrenia. The goal of the consensus conference was to update and revise the Texas Medication Algorithm Project (TMAP) algorithm for schizophrenia used in the Texas Implementation of Medication Algorithms, a statewide quality assurance program for treatment of major psychiatric illness. Four questions were identified via premeeting teleconferences. (1) Should antipsychotic treatment of first-episode schizophrenia be different from that of multiepisode schizophrenia? (2) In which algorithm stages should first-generation antipsychotics (FGAs) be an option? (3) How many antipsychotic trials should precede a clozapine trial? (4) What is the status of augmentation strategies for clozapine? Subgroups reviewed the evidence in each area and presented their findings at the conference. The algorithm was updated to incorporate the following recommendations. (1) Persons with first-episode schizophrenia typically require lower antipsychotic doses and are more sensitive to side effects such as weight gain and extrapyramidal symptoms (group consensus). Second-generation antipsychotics (SGAs) are preferred for treatment of first-episode schizophrenia (majority opinion). (2) FGAs should be included in algorithm stages after first episode that include SGAs other than clozapine as options (group consensus). (3) The recommended number of trials of other antipsychotics that should precede a clozapine trial is 2, but earlier use of clozapine should be considered in the presence of persistent problems such as suicidality, comorbid violence, and substance abuse (group consensus). (4) Augmentation is reasonable for persons with inadequate response to clozapine, but published results on augmenting agents have not identified replicable positive results (group consensus).

  13. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    NASA Astrophysics Data System (ADS)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating in a few iterations tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures compared to those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures compared to PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon

  14. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations. Subject terms: quantum algorithms, quantum computing, fault-tolerant error correction. Related publication: A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, D. Thérien, "Algebraic results on quantum automata," Theory of Computing Systems 39 (2006).

  15. Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis

    NASA Astrophysics Data System (ADS)

    Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.

    2014-04-01

    A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the line connecting each agent to the central point, which produces a vortex around the temporary central point. This random jump also helps to cope with premature convergence, a common issue in swarm-based optimization methods. The most important advantages of this algorithm are as follows. First, the algorithm involves a stochastic type of search with deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally, SBFO has a strong certainty of convergence, which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions is also presented in this article.
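
    The sketch below is a hedged toy rendering of the update described above: each agent takes a gradient step on its local cost plus a pull toward a temporary central point, with a random jump normal to the agent-centre line. The quadratic objective, step sizes, centre update (a simple mean), and jump schedule are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def local_cost_grad(x):                        # gradient of f(x) = ||x - [2, -1]||^2
        return 2.0 * (x - np.array([2.0, -1.0]))

    rng = np.random.default_rng(0)
    agents = rng.uniform(-5, 5, size=(8, 2))
    centre = agents.mean(axis=0)
    eta, w_global, jump = 0.05, 0.2, 0.1

    for it in range(200):
        for i in range(len(agents)):
            to_centre = centre - agents[i]
            normal = np.array([-to_centre[1], to_centre[0]])   # direction normal to the agent-centre line
            if np.linalg.norm(normal) > 1e-12:
                normal /= np.linalg.norm(normal)
            step = -eta * local_cost_grad(agents[i]) + eta * w_global * to_centre
            agents[i] += step + jump * rng.normal() * normal   # gradient step plus random jump
        centre = agents.mean(axis=0)               # simple stand-in for the adaptive centre
        jump *= 0.98                               # shrink the random jump over time

    print(agents.mean(axis=0))                     # approaches the minimiser [2, -1]
    ```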

  16. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
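
    A hedged sketch of the acceleration idea only: here the nearest-centre assignment inside a k-means-style loop is answered with a kd-tree query (SciPy's cKDTree built over the centres), whereas the paper stores the data points themselves in a kd-tree and adds ISODATA's split/merge logic, which is omitted here.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    points = rng.normal(size=(10000, 3))
    centres = rng.normal(size=(8, 3))

    for _ in range(10):                            # assignment/update loop
        tree = cKDTree(centres)
        _, labels = tree.query(points)             # nearest centre for every point
        centres = np.array([points[labels == k].mean(axis=0)
                            if np.any(labels == k) else centres[k]
                            for k in range(len(centres))])

    print(np.bincount(labels, minlength=len(centres)))   # cluster sizes
    ```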

  17. A Fast Implementation of the Isodata Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Le Moigne, Jacqueline; Mount, David M.; Netanyahu, Nathan S.

    2007-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  18. Portfolio optimization by using linear programing models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on a genetic algorithm. It is assumed that portfolio risk is measured by the absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, it is formulated as a linear programming model. The optimal solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than portfolio optimization performed with a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for determining the optimal investment portfolio, particularly when using linear programming models.

  19. Land cover and land use mapping of the iSimangaliso Wetland Park, South Africa: comparison of oblique and orthogonal random forest algorithms

    NASA Astrophysics Data System (ADS)

    Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad

    2016-01-01

    In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.

  20. A comparative study of Message Digest 5(MD5) and SHA256 algorithm

    NASA Astrophysics Data System (ADS)

    Rachmawati, D.; Tarigan, J. T.; Ginting, A. B. C.

    2018-03-01

    A document is a collection of written or printed data containing information. As technology advances rapidly, the integrity of a document should be maintained. Because an open document can by its nature be read and modified by many parties, the integrity of the information it contains is not preserved. To maintain data integrity, a mechanism called a digital signature is needed. A digital signature is a specific code generated by a signature-producing function. One class of algorithms used to create digital signatures is hash functions. There are many hash functions; two of them are Message Digest 5 (MD5) and SHA256. Both algorithms have their own advantages and disadvantages. The purpose of this research is to determine which algorithm is better. The parameters used to compare the two algorithms are running time and complexity. The results show that the complexity of the MD5 and SHA256 algorithms is the same, i.e., Θ(N), but in terms of speed, MD5 performs better than SHA256.
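
    A hedged sketch of the kind of running-time comparison described above, using Python's hashlib: both digests are computed over the same synthetic document bytes and timed. Absolute timings depend on the platform and say nothing about the numbers reported in the paper.

    ```python
    import hashlib
    import time

    document = b"example document contents " * 100000   # roughly 2.6 MB of synthetic data

    def time_hash(name, data, repeats=20):
        start = time.perf_counter()
        for _ in range(repeats):
            digest = hashlib.new(name, data).hexdigest()
        return digest, (time.perf_counter() - start) / repeats

    for algo in ("md5", "sha256"):
        digest, seconds = time_hash(algo, document)
        print(f"{algo}: {seconds * 1000:.2f} ms per digest, digest={digest[:16]}...")
    ```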

  1. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.

  2. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
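
    As a hedged illustration of the underlying idea of assigning binary bits to bases, the sketch below packs a DNA string at 2 bits per base (a 4x reduction over one byte per base); the DNABIT Compress bit codes for exact and reverse repeats are not reproduced.

    ```python
    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        bits = 0
        for b in seq:
            bits = (bits << 2) | CODE[b]           # append 2 bits per base
        return bits, len(seq)

    def unpack(bits, n):
        return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

    packed, n = pack("GATTACA")
    assert unpack(packed, n) == "GATTACA"
    print(f"{n} bases packed into {(2 * n + 7) // 8} bytes")
    ```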

  3. The Chorus Conflict and Loss of Separation Resolution Algorithms

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.

    2013-01-01

    The Chorus software is designed to investigate near-term, tactical conflict and loss of separation detection and resolution concepts for air traffic management. This software is currently being used in two different problem domains: en-route self-separation and sense and avoid for unmanned aircraft systems. This paper describes the core resolution algorithms that are part of Chorus. The combination of several features of the Chorus program distinguish this software from other approaches to conflict and loss of separation resolution. First, the program stores a history of state information over time which enables it to handle communication dropouts and take advantage of previous input data. Second, the underlying conflict algorithms find resolutions that solve the most urgent conflict, but also seek to prevent secondary conflicts with the other aircraft. Third, if the program is run on multiple aircraft, and the two aircraft maneuver at the same time, the result will be implicitly co-ordinated. This implicit coordination property is established by ensuring that a resolution produced by Chorus will comply with a mathematically-defined criteria whose correctness has been formally verified. Fourth, the program produces both instantaneous solutions and kinematic solutions, which are based on simple acceleration models. Finally, the program provides resolutions for recovery from loss of separation. Different versions of this software are implemented as Java and C++ software programs, respectively.

  4. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    NASA Astrophysics Data System (ADS)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for a simpler and more efficient model with better prediction capability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets, USD/CAD, USD/CHF, and USD/JPY, accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rate prediction compared to the other models included in the study.

  5. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
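
    A hedged toy sketch in the spirit of the chaotic FA variants studied above: the attractiveness coefficient beta0 is driven by a logistic map each generation instead of being fixed. The choice of map, the parameter being tuned, the sphere objective, and all constants are illustrative assumptions, not the paper's exact configuration.

    ```python
    import numpy as np

    def sphere(x):                                 # simple benchmark objective
        return float(np.sum(x**2))

    rng = np.random.default_rng(0)
    n, dim, gamma, alpha = 15, 2, 1.0, 0.2
    X = rng.uniform(-5, 5, size=(n, dim))
    chaos = 0.7                                    # logistic-map state

    for gen in range(100):
        chaos = 4.0 * chaos * (1.0 - chaos)        # logistic map in (0, 1)
        beta0 = chaos                              # chaotically tuned attractiveness
        f = np.array([sphere(x) for x in X])
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                    # move firefly i toward brighter firefly j
                    r2 = np.sum((X[i] - X[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
            f[i] = sphere(X[i])
        alpha *= 0.97                              # cool the randomisation term

    print(min(sphere(x) for x in X))               # close to 0 for the sphere function
    ```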

  6. An efficient algorithm for pairwise local alignment of protein interaction networks

    DOE PAGES

    Chen, Wenbin; Schmidt, Matthew; Tian, Wenhong; ...

    2015-04-01

    Recently, researchers seeking to understand, modify, and create beneficial traits in organisms have looked for evolutionarily conserved patterns of protein interactions. Their conservation likely means that the proteins of these conserved functional modules are important to the trait's expression. In this paper, we formulate the problem of identifying these conserved patterns as a graph optimization problem, and develop a fast heuristic algorithm for this problem. We compare the performance of our network alignment algorithm to that of the MaWISh algorithm [Koyuturk M, Kim Y, Topkara U, Subramaniam S, Szpankowski W, Grama A, Pairwise alignment of protein interaction networks, J Comput Biol 13(2): 182-199, 2006.], which bases its search algorithm on a related decision problem formulation. We find that our algorithm discovers conserved modules with a larger number of proteins in an order of magnitude less time. In conclusion, the protein sets found by our algorithm correspond to known conserved functional modules at precision and recall rates comparable to those produced by the MaWISh algorithm.

  7. A Comparison of Four-Image Reconstruction Algorithms for 3-D PET Imaging of MDAPET Camera Using Phantom Data

    NASA Astrophysics Data System (ADS)

    Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok

    2004-10-01

    We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were the ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for the MDAPET camera by comparing the noise level of images, contrast recovery, and hot lesion detection, and visually compared images. We found that, overall, the 3D-OSEM algorithm, especially when images were post-filtered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, detection of hot spots, and reproduction of brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance (±5°), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
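
    For readers unfamiliar with the expectation-maximization family compared above, the hedged sketch below shows plain MLEM on a toy system matrix; 3D-OSEM applies the same multiplicative update to ordered subsets of the projection rows, and nothing here reproduces the MDAPET-specific processing.

        import numpy as np

        def mlem(A, y, n_iters=50):
            """Basic MLEM reconstruction: A is the (n_bins x n_voxels) system matrix,
            y the measured projection counts. OSEM applies this same update to ordered
            subsets of the rows of A instead of all rows at once."""
            x = np.ones(A.shape[1])                    # uniform initial image
            sens = A.sum(axis=0)                       # sensitivity image A^T 1
            sens[sens == 0] = 1.0
            for _ in range(n_iters):
                proj = A @ x
                proj[proj == 0] = 1e-12                # avoid division by zero
                x *= (A.T @ (y / proj)) / sens
            return x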

  8. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and
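
    The record above is truncated. As a hedged illustration of the FISTA structure it builds on (not the OS-SART-preconditioned variants proposed in the paper), the sketch below applies standard FISTA with soft thresholding to an l1-regularized least-squares problem.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def fista_l1(A, b, lam, n_iters=200):
            """Generic FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            z, t = x.copy(), 1.0
            for _ in range(n_iters):
                grad = A.T @ (A @ z - b)
                x_new = soft_threshold(z - grad / L, lam / L)
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
                x, t = x_new, t_new
            return x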

  9. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    NASA Astrophysics Data System (ADS)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  10. Cloud Model Bat Algorithm

    PubMed Central

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism for the bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  11. Classification of voting algorithms for N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    A voting algorithm in N-version software is a crucial component that evaluates the execution of each of the N versions and determines the correct result. Obviously, the result of the voting algorithm determines the outcome of the N-version software in general. Thus, the choice of the voting algorithm is a vital issue. Many voting algorithms have already been developed, and they may be selected for implementation based on the specifics of the input data analysis. However, the voting algorithms applied in N-version software have not been classified. This article presents an overview of classic and recent voting algorithms used in N-version software and the authors' classification of these algorithms. Moreover, the steps of the voting algorithms are presented and the distinctive features of voting algorithms in N-version software are defined.
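
    As a minimal, hedged example of the simplest class of voter surveyed above (an exact-match majority voter; practical N-version voters often use tolerances, plurality rules, or weights), consider:

        from collections import Counter

        def majority_vote(outputs):
            """Exact-match majority voter for N-version software: return the result
            produced by more than half of the versions, or None if no majority exists."""
            value, count = Counter(outputs).most_common(1)[0]
            return value if count > len(outputs) / 2 else None

        # Example: three versions, one of them faulty.
        print(majority_vote([42, 42, 41]))   # -> 42
        print(majority_vote([1, 2, 3]))      # -> None (no majority; flag for handling)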

  12. Machine Learning Algorithms for Automated Satellite Snow and Sea Ice Detection

    NASA Astrophysics Data System (ADS)

    Bonev, George

    The continuous mapping of snow and ice cover, particularly in the Arctic and at the poles, is critical to understanding earth and atmospheric science. Much of the world's sea ice and snow covers the most inhospitable places, making measurements from satellite-based remote sensors essential. Despite the wealth of data from these instruments, many challenges remain. For instance, remote sensing instruments reside on board different satellites and observe the earth at different portions of the electromagnetic spectrum with different spatial footprints. Integrating and fusing this information to make estimates of the surface is a subject of active research. In response to these challenges, this dissertation presents two algorithms that utilize methods from statistics and machine learning, with the goal of improving the quality and accuracy of current snow and sea ice detection products. The first algorithm implements snow detection using optical/infrared instrument data. The novelty in this approach is that the classifier is trained using ground station measurements of snow depth that are collocated with the reflectance observed at the satellite. Several classification methods are compared using this training data to identify the one yielding the highest accuracy and optimal space/time complexity. The algorithm is then evaluated against the current operational NASA snow product and is found to produce comparable and in some cases superior accuracy. The second algorithm presents a fully automated approach to sea ice detection that integrates data obtained from passive microwave and optical/infrared satellite instruments. For a particular region of interest, the algorithm generates sea ice maps of each individual satellite overpass and then aggregates them to a daily composite level, maximizing the amount of high resolution information available. The algorithm is evaluated both at the individual satellite overpass level and at the daily

  13. A Spectrally Selective Attenuation Mechanism-Based Kpar Algorithm for Biomass Heating Effect Simulation in the Open Ocean

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng

    2017-12-01

    Quantifying the diffuse attenuation coefficient of the photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm includes the following two portions: (1) a neural network model for deriving the diffuse attenuation coefficients (Kd) that accounts for the residual error in satellite Rrs, and (2) a three-band depth-dependent Kpar algorithm (TDKA) for describing the spectrally selective attenuation mechanism of underwater solar radiation in the open oceans. This algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it can produce acceptable PAR profile estimations while clearly removing the impacts of satellite residual errors on Kpar estimations. Furthermore, the performance of the TDKA algorithm is evaluated by its applicability in Zeu derivation and in simulating the mean temperature within the mixed layer depth (TML); the results show that it can significantly decrease the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied to simulate biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data it is found that the biomass heating effects can lead to a 3.4°C maximum positive temperature difference in the upper layers but could result in a 0.67°C maximum negative temperature difference in the deep layers.

  14. File text security using Hybrid Cryptosystem with Playfair Cipher Algorithm and Knapsack Naccache-Stern Algorithm

    NASA Astrophysics Data System (ADS)

    Amalia; Budiman, M. A.; Sitepu, R.

    2018-03-01

    Cryptography is one of the best methods for keeping information safe from security attacks by unauthorized people. Many studies have been done by previous researchers to develop more robust cryptographic algorithms that provide high security for data communication. One way to strengthen data security is the hybrid cryptosystem method, which combines a symmetric and an asymmetric algorithm. In this study, we examined a hybrid cryptosystem consisting of a modified 16x16 Playfair Cipher as the symmetric algorithm and Knapsack Naccache-Stern as the asymmetric algorithm. We measured the running time of this hybrid scheme in several experiments, testing messages of 10, 100, 1000, 10000, and 100000 characters and keys of length 10, 20, 30, and 40. The results show that the processing time for encryption and decryption in each algorithm grows linearly: the longer the message, the more time is needed to encrypt and decrypt it. The encryption of the Knapsack Naccache-Stern algorithm takes longer than its decryption, while the encryption of the modified 16x16 Playfair Cipher takes less time than its decryption.

  15. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptation is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm. PMID:23935409

  16. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.

  17. A set-covering based heuristic algorithm for the periodic vehicle routing problem

    PubMed Central

    Cacchiani, V.; Hemmelmayr, V.C.; Tricoire, F.

    2014-01-01

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems. PMID:24748696

  18. [The algorithm for planning cosmonauts' timeline in flight (by the results of long-duration Mir mission)].

    PubMed

    Nechaev, A P

    2001-01-01

    Results of the investigation of the interrelation between cosmonauts' erroneous actions and work and rest schedule intensity in fourteen long-duration Mir missions are presented in the paper. The statistical significance of this dependence has been established, and its nonlinear character has been revealed. An algorithm of short-range planning of crew operations has been developed based on determination of critical crew work loads deduced from increases in erroneous actions. Together with other methods the suggested approach may be used to raise cosmonauts' performance reliability.

  19. Multifractal analysis of multiparticle emission data in the framework of visibility graph and sandbox algorithm

    NASA Astrophysics Data System (ADS)

    Mali, P.; Manna, S. K.; Mukhopadhyay, A.; Haldar, P. K.; Singh, G.

    2018-03-01

    Multiparticle emission data in nucleus-nucleus collisions are studied in a graph theoretical approach. The sandbox algorithm used to analyze complex networks is employed to characterize the multifractal properties of the visibility graphs associated with the pseudorapidity distribution of charged particles produced in high-energy heavy-ion collisions. Experimental data on the 28Si+Ag/Br interaction at laboratory energy Elab = 14.5A GeV, and the 16O+Ag/Br and 32S+Ag/Br interactions, both at Elab = 200A GeV, are used in this analysis. We observe a scale-free nature of the degree distributions of the visibility and horizontal visibility graphs associated with the event-wise pseudorapidity distributions. Equivalent event samples simulated by ultra-relativistic quantum molecular dynamics produce degree distributions that are almost identical to the respective experiments. However, the multifractal variables obtained using the sandbox algorithm for the experimental data differ to some extent from the respective simulated results.
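
    For context, the natural visibility criterion that maps a series to a graph is simple enough to show directly. The hedged sketch below builds the edge set with the standard convexity test in a brute-force way; it is a toy implementation, not the analysis pipeline used in the paper.

        from collections import Counter

        def natural_visibility_graph(series):
            """Build the natural visibility graph of a 1-D series: nodes are sample
            indices, and i, j are linked if the straight line between (i, y_i) and
            (j, y_j) passes above every intermediate sample."""
            n = len(series)
            edges = set()
            for i in range(n):
                for j in range(i + 1, n):
                    visible = all(
                        series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                        for k in range(i + 1, j)
                    )
                    if visible:
                        edges.add((i, j))
            return edges

        # Degree distribution of a small example series.
        g = natural_visibility_graph([2.0, 1.0, 3.0, 0.5, 2.5])
        degrees = Counter()
        for i, j in g:
            degrees[i] += 1
            degrees[j] += 1
        print(dict(degrees))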

  20. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Because malaria requires prompt and accurate diagnosis, the current study proposes an unsupervised pixel segmentation based on clustering algorithms in order to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image in order to segment the infected cells from the blood cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image and to remove unwanted regions such as small background pixels. Finally, a seeded region growing area extraction algorithm is applied to remove the large unwanted regions that still appear in the image, whose size prevents them from being cleaned by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms is analyzed qualitatively and quantitatively by comparing the cascaded approach with the MKM and FCM clustering algorithms used alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy compared to the segmentation results provided by the MKM and FCM algorithms.

  1. Curiosity Search: Producing Generalists by Encouraging Individuals to Continually Explore and Acquire Skills throughout Their Lifetime.

    PubMed

    Stanton, Christopher; Clune, Jeff

    2016-01-01

    Natural animals are renowned for their ability to acquire a diverse and general skill set over the course of their lifetime. However, research in artificial intelligence has yet to produce agents that acquire all or even most of the available skills in non-trivial environments. One candidate algorithm for encouraging the production of such individuals is Novelty Search, which pressures organisms to exhibit different behaviors from other individuals. However, we hypothesized that Novelty Search would produce sub-populations of specialists, in which each individual possesses a subset of skills, but no one organism acquires all or most of the skills. In this paper, we propose a new algorithm called Curiosity Search, which is designed to produce individuals that acquire as many skills as possible during their lifetime. We show that in a multiple-skill maze environment, Curiosity Search does produce individuals that explore their entire domain, while a traditional implementation of Novelty Search produces specialists. However, we reveal that when modified to encourage intra-life behavioral diversity, Novelty Search can produce organisms that explore almost as much of their environment as Curiosity Search, although Curiosity Search retains a significant performance edge. Finally, we show that Curiosity Search is a useful helper objective when combined with Novelty Search, producing individuals that acquire significantly more skills than either algorithm alone.

  2. Curiosity Search: Producing Generalists by Encouraging Individuals to Continually Explore and Acquire Skills throughout Their Lifetime

    PubMed Central

    Clune, Jeff

    2016-01-01

    Natural animals are renowned for their ability to acquire a diverse and general skill set over the course of their lifetime. However, research in artificial intelligence has yet to produce agents that acquire all or even most of the available skills in non-trivial environments. One candidate algorithm for encouraging the production of such individuals is Novelty Search, which pressures organisms to exhibit different behaviors from other individuals. However, we hypothesized that Novelty Search would produce sub-populations of specialists, in which each individual possesses a subset of skills, but no one organism acquires all or most of the skills. In this paper, we propose a new algorithm called Curiosity Search, which is designed to produce individuals that acquire as many skills as possible during their lifetime. We show that in a multiple-skill maze environment, Curiosity Search does produce individuals that explore their entire domain, while a traditional implementation of Novelty Search produces specialists. However, we reveal that when modified to encourage intra-life behavioral diversity, Novelty Search can produce organisms that explore almost as much of their environment as Curiosity Search, although Curiosity Search retains a significant performance edge. Finally, we show that Curiosity Search is a useful helper objective when combined with Novelty Search, producing individuals that acquire significantly more skills than either algorithm alone. PMID:27589267

  3. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    PubMed

    Sethi, Gaurav; Saini, B S

    2015-12-01

    This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents are likely to produce the healthiest offspring, which leads to the least fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the search for the optimal solution. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Thereby, all of the least fit parent chromosomes are combined with highly fit parents to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
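
    One reading of the pairing scheme described above is to recombine every weak chromosome with a strong one instead of letting weak parents mate among themselves. The sketch below is an illustrative interpretation of that idea only, not the authors' exact operator or their modified elitism; the helper names are hypothetical.

        import random

        def rank_pairing_crossover(population, fitness, crossover):
            """Pair the fittest individual with the least fit, the second fittest with
            the second least fit, and so on, so that weak chromosomes are always
            recombined with strong ones."""
            ranked = sorted(population, key=fitness, reverse=True)
            offspring = [crossover(strong, weak)
                         for strong, weak in zip(ranked, reversed(ranked))]
            return offspring[: len(population)]

        def one_point_crossover(a, b):
            cut = random.randint(1, len(a) - 1)
            return a[:cut] + b[cut:]

        # Example with bit-string chromosomes scored by the number of ones.
        pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
        children = rank_pairing_crossover(pop, fitness=sum, crossover=one_point_crossover)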

  4. A modified MOD16 algorithm to estimate evapotranspiration over alpine meadow on the Tibetan Plateau, China

    NASA Astrophysics Data System (ADS)

    Chang, Yaping; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Zhang, Shiqiang

    2018-06-01

    The long-term change of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates, such as the Tibetan Plateau (TP). This study proposed a modified algorithm for estimating ET based on the MOD16 algorithm on a global scale over alpine meadow on the TP in China. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was adopted to reduce the uncertainty in soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons were made between the ET observed using the Eddy Covariance (EC) and estimated using both the original and modified algorithms. The results revealed that the modified algorithm performed better than the original MOD16 algorithm with the coefficient of determination (R2) increasing from 0.26 to 0.68, and root mean square error (RMSE) decreasing from 1.56 to 0.78 mm d-1. The modified algorithm performed slightly better with a higher R2 (0.70) and lower RMSE (0.61 mm d-1) for after-precipitation days than for non-precipitation days at Suli site. Contrarily, better results were obtained for non-precipitation days than for after-precipitation days at Arou, Tanggula, and Hulugou sites, indicating that the modified algorithm may be more suitable for estimating ET for non-precipitation days with higher accuracy than for after-precipitation days, which had large observation errors. The comparisons between the modified algorithm and two mainstream methods suggested that the modified algorithm could produce high accuracy ET over the alpine meadow sites on the TP.

  5. Algorithm for identifying and separating beats from arterial pulse records

    PubMed Central

    Treo, Ernesto F; Herrera, Myriam C; Valentinuzzi, Max E

    2005-01-01

    positive/negative criteria. Synchronization ability was measured through the coefficient of variation and the median value of correlation for each patient. These parameters were assessed by means of Friedman's ANOVA and Kendall's concordance test. Results Sensitivity was 97% and 91% for the two operators, respectively, while accuracy was zero for both of them. The synchronism variability analysis was significant (p < 0.01) for the two statistics, showing that the algorithm produced the best result. Conclusion The proposed algorithm showed good performance as expressed by its high sensitivity. The correlation analysis demonstrated that, from the synchronism point of view, the algorithm performed the best detection. Patients with marked arrhythmic processes are not good candidates for this kind of analysis. At most, they would be singled out by the algorithm and, thereafter, checked by an operator. PMID:16095532

  6. A Comparative Analysis of the ADOS-G and ADOS-2 Algorithms: Preliminary Findings.

    PubMed

    Dorlack, Taylor P; Myers, Orrin B; Kodituwakku, Piyadasa W

    2018-06-01

    The Autism Diagnostic Observation Schedule (ADOS) is a widely utilized observational assessment tool for diagnosis of autism spectrum disorders. The original ADOS was succeeded by the ADOS-G with noted improvements. More recently, the ADOS-2 was introduced to further increase its diagnostic accuracy. Studies examining the validity of the ADOS have produced mixed findings, and pooled relationship trends between the algorithm versions are yet to be analyzed. The current review seeks to compare the relative merits of the ADOS-G and ADOS-2 algorithms, Modules 1-3. Eight studies met inclusion criteria for the review, and six were selected for paired comparisons of the sensitivity and specificity of the ADOS. Results indicate several contradictory findings, underscoring the importance of further study.

  7. Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2015-06-01

    Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design"; extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.

  8. A Parametric k-Means Algorithm

    PubMed Central

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
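
    A hedged, univariate sketch of the procedure described above: fit the distribution by maximum likelihood, simulate a very large sample from the fitted model, and run k-means on the simulated data. The normal-model assumption, the initialization, and the function name are illustrative only.

        import numpy as np

        def parametric_principal_points(sample, k, n_sim=200_000, n_iters=50, rng=None):
            """Estimate k principal points via the parametric k-means idea: fit the
            model by maximum likelihood, simulate a large sample from it, then run
            (1-D) k-means on the simulated data."""
            rng = np.random.default_rng(rng)
            mu, sigma = np.mean(sample), np.std(sample)            # MLE under a normal model
            sim = rng.normal(mu, sigma, size=n_sim)
            centers = np.quantile(sim, (np.arange(k) + 0.5) / k)   # spread-out start
            for _ in range(n_iters):                               # Lloyd iterations
                labels = np.argmin(np.abs(sim[:, None] - centers[None, :]), axis=1)
                centers = np.array([sim[labels == j].mean() for j in range(k)])
            return np.sort(centers)

        # Two principal points of N(0, 1) are approximately +-sqrt(2/pi) ~= +-0.80.
        data = np.random.default_rng(0).normal(size=500)
        print(parametric_principal_points(data, k=2))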

  9. A novel gene network inference algorithm using predictive minimum description length approach.

    PubMed

    Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang

    2010-05-28

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold which defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine tuning parameter is used as control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm which incorporated mutual information (MI), conditional mutual information (CMI) and predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need of a user-specified fine tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced less false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data that eliminates the need of a fine tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the

  10. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss the testing of image processing algorithms for mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when subjected to a Quantum Cascade Laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.

  11. A genetic algorithm for replica server placement

    NASA Astrophysics Data System (ADS)

    Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl

    2012-01-01

    Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One previous approach is the Greedy algorithm proposed by Qiu et al., which needs knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the Greedy algorithm proposed by Qiu et al. and with the Optimum algorithm. We find that our approach can achieve better results than the Greedy algorithm of Qiu et al., but its computational time is greater than that of the Greedy algorithm.

  12. A genetic algorithm for replica server placement

    NASA Astrophysics Data System (ADS)

    Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl

    2011-12-01

    Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One previous approach is the Greedy algorithm proposed by Qiu et al., which needs knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the Greedy algorithm proposed by Qiu et al. and with the Optimum algorithm. We find that our approach can achieve better results than the Greedy algorithm of Qiu et al., but its computational time is greater than that of the Greedy algorithm.

  13. Hybrid cryptosystem implementation using fast data encipherment algorithm (FEAL) and goldwasser-micali algorithm for file security

    NASA Astrophysics Data System (ADS)

    Rachmawati, D.; Budiman, M. A.; Siburian, W. S. E.

    2018-05-01

    In the process of exchanging files, security is indispensable to avoid the theft of data. Cryptography is one of the sciences used to secure data by encoding it. The Fast Data Encipherment Algorithm (FEAL) is a symmetric block cipher. Therefore, the file to be protected is encrypted and decrypted using the FEAL algorithm. To further secure the data, the session key used by FEAL is encoded with the Goldwasser-Micali algorithm, an asymmetric cryptographic algorithm based on a probabilistic concept. In the encryption process, the key is converted into binary form, and the random selection of the value x causes the resulting cipher key to differ for each binary value. The combination of a symmetric and an asymmetric algorithm is called a hybrid cryptosystem. The use of the FEAL and Goldwasser-Micali algorithms can restore the message to its original form, and the time required by FEAL for encryption and decryption is directly proportional to the length of the message. For the Goldwasser-Micali algorithm, however, the length of the message is not directly proportional to the time of encryption and decryption.

  14. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Corners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  15. Algorithm Engineering: Concepts and Practice

    NASA Astrophysics Data System (ADS)

    Chimani, Markus; Klein, Karsten

    Over the last years, the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: it starts with the design of the algorithm, followed by analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretic performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories—shortest path problems and text search—where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.

  16. Improved Differentiation of Streptococcus pneumoniae and Other S. mitis Group Streptococci by MALDI Biotyper Using an Improved MALDI Biotyper Database Content and a Novel Result Interpretation Algorithm.

    PubMed

    Harju, Inka; Lange, Christoph; Kostrzewa, Markus; Maier, Thomas; Rantakokko-Jalava, Kaisu; Haanperä, Marjo

    2017-03-01

    Reliable distinction of Streptococcus pneumoniae and viridans group streptococci is important because of the different pathogenic properties of these organisms. Differentiation between S. pneumoniae and closely related Streptococcus mitis species group streptococci has always been challenging, even when using such modern methods as 16S rRNA gene sequencing or matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. In this study, a novel algorithm combined with an enhanced database was evaluated for differentiation between S. pneumoniae and S. mitis species group streptococci. One hundred one clinical S. mitis species group streptococcal strains and 188 clinical S. pneumoniae strains were identified by both the standard MALDI Biotyper database alone and that combined with a novel algorithm. The database update from 4,613 strains to 5,627 strains drastically improved the differentiation of S. pneumoniae and S. mitis species group streptococci: when the new database version containing 5,627 strains was used, only one of the 101 S. mitis species group isolates was misidentified as S. pneumoniae, whereas 66 of them were misidentified as S. pneumoniae when the earlier 4,613-strain MALDI Biotyper database version was used. The updated MALDI Biotyper database combined with the novel algorithm showed even better performance, producing no misidentifications of the S. mitis species group strains as S. pneumoniae. All S. pneumoniae strains were correctly identified as S. pneumoniae with both the standard MALDI Biotyper database and the standard MALDI Biotyper database combined with the novel algorithm. This new algorithm thus enables reliable differentiation between pneumococci and other S. mitis species group streptococci with the MALDI Biotyper. Copyright © 2017 American Society for Microbiology.

  17. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    NASA Astrophysics Data System (ADS)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments were devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, which is a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor alone to determine the location of a detector would limit the possible locations, and it does not allow for room dependence (the facility-dependent deviation) when generating a detector pseudo-location for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average

  18. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  19. Harmony Search Algorithm for Word Sense Disambiguation.

    PubMed

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is chosen in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches to word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, whereas the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, as the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. To evaluate the effectiveness of the dependency generator, we applied the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used.
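
    As a hedged sketch of the metaheuristic itself (basic Harmony Search on a toy continuous objective, not the semantic-similarity fitness or the dependency parsing used in the paper), the example below shows the memory-consideration, pitch-adjustment, and random-selection steps; all parameter values are assumptions.

        import numpy as np

        def harmony_search(objective, dim, lower, upper, hms=20, hmcr=0.9, par=0.3,
                           bw=0.05, n_iters=2000, rng=None):
            """Minimize `objective` over [lower, upper]^dim with basic Harmony Search:
            each new harmony is built per dimension from memory (prob. hmcr), optionally
            pitch-adjusted (prob. par), or drawn at random otherwise."""
            rng = np.random.default_rng(rng)
            memory = rng.uniform(lower, upper, size=(hms, dim))
            scores = np.array([objective(h) for h in memory])
            for _ in range(n_iters):
                new = np.empty(dim)
                for d in range(dim):
                    if rng.random() < hmcr:
                        new[d] = memory[rng.integers(hms), d]
                        if rng.random() < par:
                            new[d] += bw * (upper - lower) * rng.uniform(-1, 1)
                    else:
                        new[d] = rng.uniform(lower, upper)
                new = np.clip(new, lower, upper)
                worst = np.argmax(scores)
                s = objective(new)
                if s < scores[worst]:                 # replace the worst stored harmony
                    memory[worst], scores[worst] = new, s
            return memory[np.argmin(scores)]

        # Example: minimize the sphere function in 5 dimensions.
        best = harmony_search(lambda x: float(np.sum(x * x)), dim=5, lower=-5.0, upper=5.0)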

  20. Harmony Search Algorithm for Word Sense Disambiguation

    PubMed Central

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is chosen in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches to word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, whereas the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, as the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. To evaluate the effectiveness of the dependency generator, we applied the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368

  1. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in a digital image with the plain LSB algorithm is easily detected, with high accuracy, by X2 and RS steganalysis. We improved the LSB algorithm by changing the embedding locations and the embedding method and by combining sub-affine transformation with matrix coding, and we propose a new LSB algorithm. Experimental results show that the improved algorithm can effectively resist X2 and RS steganalysis.
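
    For reference, the baseline that the improved scheme hardens is plain LSB substitution. A minimal, hedged sketch over a flat list of 8-bit pixel values (not the paper's improved embedding with sub-affine transformation and matrix coding) looks like this:

        def lsb_embed(pixels, bits):
            """Embed a bit string into the least significant bits of a flat list of
            8-bit pixel values (plain LSB, i.e. the baseline the paper improves on)."""
            out = list(pixels)
            for i, bit in enumerate(bits):
                out[i] = (out[i] & ~1) | int(bit)
            return out

        def lsb_extract(pixels, n_bits):
            return "".join(str(p & 1) for p in pixels[:n_bits])

        cover = [142, 17, 63, 200, 91, 54, 128, 77]
        stego = lsb_embed(cover, "1011")
        assert lsb_extract(stego, 4) == "1011"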

  2. A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.

    PubMed

    Ni, Qianwu; Chen, Lei

    2017-01-01

    Correct prediction of protein structural class is beneficial to the investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, it remains a great challenge to select a proper classification algorithm and to extract the essential features that participate in classification. In this study, a feature and algorithm selection method is presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent the features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The predicted classes yielded by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. Algorithms were then taken from this list one by one to build an ensemble prediction model. Finally, we selected the ensemble prediction model with the best performance as the optimal ensemble prediction model. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. The feature selection and algorithm selection procedures are genuinely helpful for building an ensemble prediction model that yields better performance. Copyright© Bentham Science Publishers.

  3. Firefly Algorithm, Lévy Flights and Global Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She

    Nature-inspired algorithms such as Particle Swarm Optimization and Firefly Algorithm are among the most powerful algorithms for optimization. In this paper, we intend to formulate a new metaheuristic algorithm by combining Lévy flights with the search strategy via the Firefly Algorithm. Numerical studies and results suggest that the proposed Lévy-flight firefly algorithm is superior to existing metaheuristic algorithms. Finally implications for further research and wider applications will be discussed.

  4. Classifying spatially heterogeneous wetland communities using machine learning algorithms and spectral and textural features.

    PubMed

    Szantoi, Zoltan; Escobedo, Francisco J; Abd-Elrahman, Amr; Pearlstine, Leonard; Dewitt, Bon; Smith, Scot

    2015-05-01

    Mapping of wetlands (marsh vs. swamp vs. upland) is a common remote sensing application. Yet, discriminating between similar freshwater communities such as graminoid/sedge from remotely sensed imagery is more difficult. Most of this activity has been performed using medium to low resolution imagery. There are only a few studies using high spatial resolution imagery and machine learning image classification algorithms for mapping heterogeneous wetland plant communities. This study addresses this void by analyzing whether machine learning classifiers such as decision trees (DT) and artificial neural networks (ANN) can accurately classify graminoid/sedge communities using high resolution aerial imagery and image texture data in the Everglades National Park, Florida. In addition to spectral bands, the normalized difference vegetation index, and first- and second-order texture features derived from the near-infrared band were analyzed. Classifier accuracies were assessed using confusion tables and the calculated kappa coefficients of the resulting maps. The results indicated that an ANN (multilayer perceptron based on backpropagation) algorithm produced a statistically significantly higher accuracy (82.04%) than the DT (QUEST) algorithm (80.48%) or the maximum likelihood (80.56%) classifier (α < 0.05). Findings show that using multiple window sizes provided the best results. First-order texture features also provided computational advantages and results that were not significantly different from those using second-order texture features.

  5. Scheduling Algorithms for Maximizing Throughput with Zero-Forcing Beamforming in a MIMO Wireless System

    NASA Astrophysics Data System (ADS)

    Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi

    Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum rate capacity as DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance the throughput and fairness among the users, respectively. However, they are not throughput optimal, and fairness and throughput decrease when user queue lengths differ owing to differences in channel quality. Therefore, we propose two different scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms have to produce the rate allocation and power allocation for the selected users based on a modified water filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
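
    The rate and power allocation step mentioned above builds on water filling; a sketch of the classic water-filling allocation over effective channel gains is shown below (the gains and power budget are made up, and the paper's queue-aware modification is not reproduced).

      # Standard water-filling power allocation over effective channel gains
      # (a building block for the modified water filling mentioned above).
      import numpy as np

      def water_filling(gains, total_power):
          """Allocate total_power across channels with gains g_k to maximize
          sum log2(1 + g_k * p_k), using the classic water-level search."""
          g = np.asarray(gains, dtype=float)
          order = np.argsort(g)[::-1]          # strongest channels first
          g_sorted = g[order]
          p = np.zeros_like(g)
          for k in range(len(g), 0, -1):
              # Candidate water level using the k strongest channels.
              mu = (total_power + np.sum(1.0 / g_sorted[:k])) / k
              alloc = mu - 1.0 / g_sorted[:k]
              if alloc[-1] >= 0:               # feasible: weakest used channel gets >= 0
                  p[order[:k]] = alloc
                  break
          return p

      gains = [2.0, 1.0, 0.5, 0.1]             # assumed effective gains after ZFBF
      p = water_filling(gains, total_power=4.0)
      print("power allocation:", np.round(p, 3), "sum =", p.sum())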

  6. An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1981-01-01

    An algorithm for minimum-cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch and bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that consume no coolant. Results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.
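
    A toy version of the branch-and-bound step is sketched below: given a transition-cost matrix, it searches open orderings of set points and prunes any partial ordering whose cost already exceeds the best complete ordering found. The cost matrix is invented, and the lumped tunnel model that would generate it is not modeled.

      # Small branch-and-bound search for the cheapest open ordering of set points,
      # given a (made-up) matrix of state-transition costs.
      import math

      cost = [  # cost[i][j] = cost of moving from set point i to set point j
          [0.0, 2.0, 9.0, 10.0],
          [1.0, 0.0, 6.0, 4.0],
          [15.0, 7.0, 0.0, 8.0],
          [6.0, 3.0, 12.0, 0.0],
      ]
      n = len(cost)
      best_cost, best_order = math.inf, None

      def branch(order, remaining, so_far):
          global best_cost, best_order
          if so_far >= best_cost:          # bound: prune dominated partial orderings
              return
          if not remaining:
              best_cost, best_order = so_far, order
              return
          for nxt in remaining:
              branch(order + [nxt], remaining - {nxt}, so_far + cost[order[-1]][nxt])

      for start in range(n):               # open ordering: any start, any end
          branch([start], set(range(n)) - {start}, 0.0)
      print("cheapest ordering:", best_order, "cost:", best_cost)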

  7. Algorithms for optimizing cross-overs in DNA shuffling.

    PubMed

    He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris

    2012-03-21

    DNA shuffling generates combinatorial libraries of chimeric genes by stochastically recombining parent genes. The resulting libraries are subjected to large-scale genetic selection or screening to identify those chimeras with favorable properties (e.g., enhanced stability or enzymatic activity). While DNA shuffling has been applied quite successfully, it is limited by its homology-dependent, stochastic nature. Consequently, it is used only with parents of sufficient overall sequence identity, and provides no control over the resulting chimeric library. This paper presents efficient methods to extend the scope of DNA shuffling to handle significantly more diverse parents and to generate more predictable, optimized libraries. Our CODNS (cross-over optimization for DNA shuffling) approach employs polynomial-time dynamic programming algorithms to select codons for the parental amino acids, allowing for zero or a fixed number of conservative substitutions. We first present efficient algorithms to optimize the local sequence identity or the nearest-neighbor approximation of the change in free energy upon annealing, objectives that were previously optimized by computationally-expensive integer programming methods. We then present efficient algorithms for more powerful objectives that seek to localize and enhance the frequency of recombination by producing "runs" of common nucleotides either overall or according to the sequence diversity of the resulting chimeras. We demonstrate the effectiveness of CODNS in choosing codons and allocating substitutions to promote recombination between parents targeted in earlier studies: two GAR transformylases (41% amino acid sequence identity), two very distantly related DNA polymerases, Pol X and β (15%), and beta-lactamases of varying identity (26-47%). Our methods provide the protein engineer with a new approach to DNA shuffling that supports substantially more diverse parents, is more deterministic, and generates more predictable

  8. Comparison of turbulence mitigation algorithms

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  9. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
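
    For reference, Batcher's odd-even merge sort can be written as the short serial sketch below (input length assumed to be a power of two); on a vector machine such as STAR, the inner compare-exchange loops are the operations that vectorize.

      # Batcher's odd-even merge sort (serial sketch; input length must be a power
      # of two). On a vector machine the inner compare-exchange loops vectorize.
      def compare_exchange(a, i, j):
          if a[i] > a[j]:
              a[i], a[j] = a[j], a[i]

      def oddeven_merge(a, lo, n, r):
          step = r * 2
          if step < n:
              oddeven_merge(a, lo, n, step)
              oddeven_merge(a, lo + r, n, step)
              for i in range(lo + r, lo + n - r, step):
                  compare_exchange(a, i, i + r)
          else:
              compare_exchange(a, lo, lo + r)

      def oddeven_merge_sort(a, lo=0, n=None):
          if n is None:
              n = len(a)
          if n > 1:
              m = n // 2
              oddeven_merge_sort(a, lo, m)
              oddeven_merge_sort(a, lo + m, m)
              oddeven_merge(a, lo, n, 1)

      data = [7, 3, 8, 1, 6, 2, 5, 4]
      oddeven_merge_sort(data)
      print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]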

  10. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.

    2006-01-01

    constrain the evolutionary design to a monopole wire antenna. The results of the runs produced requirements-compliant antennas that were subsequently fabricated and tested. The evolved antenna has a number of advantages with regard to power consumption, fabrication time and complexity, and performance. Lower power requirements result from achieving high gain across a wider range of elevation angles, thus allowing a broader range of angles over which maximum data throughput can be achieved. Since the evolved antenna does not require a phasing circuit, less design and fabrication work is required. In terms of overall work, the evolved antenna required approximately three person-months to design and fabricate whereas the conventional antenna required about five. Furthermore, when the mission was modified and new orbital parameters selected, a redesign of the antenna to new requirements was required. The evolutionary system was rapidly modified and a new antenna evolved in a few weeks. The evolved antenna was shown to be compliant to the ST5 mission requirements. It has an unusual organic looking structure, one that expert antenna designers would not likely produce. This antenna has been tested, baselined and is scheduled to fly this year. In addition to the ST5 antenna, our laboratory has evolved an S-band phased array antenna element design that meets the requirements for NASA's TDRS-C communications satellite scheduled for launch early next decade. A combination of fairly broad bandwidth, high efficiency and circular polarization at high gain made for another challenging design problem. We chose to constrain the evolutionary design to a crossed-element Yagi antenna. The specification called for two types of elements, one for receive only and one for transmit/receive. We were able to evolve a single element design that meets both specifications thereby simplifying the antenna and reducing testing and integration costs. The highest performance antenna found using a getic

  11. Algorithmic Trading with Developmental and Linear Genetic Programming

    NASA Astrophysics Data System (ADS)

    Wilson, Garnett; Banzhaf, Wolfgang

    A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rise and fall, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.

  12. Physics-based signal processing algorithms for micromachined cantilever arrays

    DOEpatents

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  13. Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs

    NASA Astrophysics Data System (ADS)

    Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.

    2017-08-01

    In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points on other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The stacked images obtained on real surveys do not appear visually degraded. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device. An interesting by-product of this algorithm is the 3D rotation estimated by a
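
    A much-simplified offline analogue of the stacking step is sketched below: each short-exposure frame is registered to the first with a RANSAC-estimated homography and the registered frames are averaged. It substitutes ORB matching for the paper's FAST detection with IMU-aided template matching, ignores the camera's distortion model, and runs on synthetic shifted frames.

      # Simplified stacking sketch: register each short-exposure frame to the first
      # with a RANSAC homography estimated from ORB matches, then average.
      import cv2
      import numpy as np

      rng = np.random.default_rng(0)
      base = np.zeros((240, 320), np.uint8)
      for _ in range(300):                                   # random bright blobs
          y, x = rng.integers(10, 230), rng.integers(10, 310)
          base[y:y + 4, x:x + 4] = rng.integers(100, 255)
      frames = [np.roll(base, (dy, dx), axis=(0, 1))         # simulated camera shake
                for dy, dx in [(0, 0), (3, -2), (-4, 5)]]

      ref = frames[0]
      h, w = ref.shape
      orb = cv2.ORB_create(1000)
      kp_ref, des_ref = orb.detectAndCompute(ref, None)
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

      acc = ref.astype(np.float64)
      for img in frames[1:]:
          kp, des = orb.detectAndCompute(img, None)
          matches = sorted(matcher.match(des, des_ref), key=lambda m: m.distance)[:200]
          src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          acc += cv2.warpPerspective(img, H, (w, h)).astype(np.float64)

      stacked = (acc / len(frames)).astype(np.uint8)
      cv2.imwrite("stacked.png", stacked)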

  14. A Family of Algorithms for Computing Consensus about Node State from Network Data

    PubMed Central

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2013-01-01

    Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes–from ranking websites to determining critical species in ecosystems–yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus either by taking into account the number of a target node's direct connections and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus–through breadth or depth–impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes “form opinions” about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which
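
    As a concrete example of a depth-style score, the sketch below computes eigenvector centrality by power iteration on a small, invented weighted adjacency matrix, and contrasts it with the breadth-style weighted in-degree.

      # Depth-style consensus score: eigenvector centrality by power iteration on a
      # small (made-up) weighted adjacency matrix A, where A[i, j] is the weight of
      # the edge from i to j.
      import numpy as np

      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 2],
                    [0, 1, 0, 1],
                    [1, 0, 2, 0]], dtype=float)

      def eigenvector_centrality(A, iters=200, tol=1e-10):
          x = np.ones(A.shape[0]) / A.shape[0]
          for _ in range(iters):
              x_new = A.T @ x            # score flows along incoming edges
              x_new /= np.linalg.norm(x_new)
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x_new

      print("centrality:", np.round(eigenvector_centrality(A), 3))

      # A breadth-style score, by contrast, would just use the weighted in-degree:
      print("weighted in-degree:", A.sum(axis=0))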

  15. Performance Analysis of Continuous Black-Box Optimization Algorithms via Footprints in Instance Space.

    PubMed

    Muñoz, Mario A; Smith-Miles, Kate A

    2017-01-01

    This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.

  16. Exploring Algorithms for Stellar Light Curves With TESS

    NASA Astrophysics Data System (ADS)

    Buzasi, Derek

    2018-01-01

    The Kepler and K2 missions have produced tens of thousands of stellar light curves, which have been used to measure rotation periods, characterize photometric activity levels, and explore phenomena such as differential rotation. The quasi-periodic nature of rotational light curves, combined with the potential presence of additional periodicities not due to rotation, complicates the analysis of these time series and makes characterization of uncertainties difficult. A variety of algorithms have been used for the extraction of rotational signals, including autocorrelation functions, discrete Fourier transforms, Lomb-Scargle periodograms, wavelet transforms, and the Hilbert-Huang transform. In addition, in the case of K2 a number of different pipelines have been used to produce initial detrended light curves from the raw image frames. In the near future, TESS photometry, particularly that deriving from the full-frame images, will dramatically further expand the number of such light curves, but details of the pipeline to be used to produce photometry from the FFIs remain under development. K2 data offers us an opportunity to explore the utility of different reduction and analysis tool combinations applied to these astrophysically important tasks. In this work, we apply a wide range of algorithms to light curves produced by a number of popular K2 pipeline products to better understand the advantages and limitations of each approach and provide guidance for the most reliable and most efficient analysis of TESS stellar data.
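
    One of the algorithms listed above, the Lomb-Scargle periodogram, can be applied to an unevenly sampled light curve in a few lines; the sketch below recovers an injected rotation period from synthetic data (the sampling, noise level, and period are assumptions, not K2 or TESS products).

      # Minimal rotation-period estimate on a synthetic, unevenly sampled light curve
      # using astropy's Lomb-Scargle periodogram.
      import numpy as np
      from astropy.timeseries import LombScargle

      rng = np.random.default_rng(1)
      t = np.sort(rng.uniform(0, 80, 2000))              # days, uneven sampling
      p_true = 12.3                                      # injected rotation period
      flux = (1.0 + 0.01 * np.sin(2 * np.pi * t / p_true)
              + 0.002 * rng.normal(size=t.size))

      frequency, power = LombScargle(t, flux).autopower(minimum_frequency=1 / 50,
                                                        maximum_frequency=1.0)
      best_period = 1 / frequency[np.argmax(power)]
      print(f"recovered period: {best_period:.2f} d (injected {p_true} d)")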

  17. Novel medical image enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
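
    The backbone of the first algorithm, an alpha-trimmed mean filter, is straightforward to implement directly; the sketch below applies it to a random test image and then uses it in a simple unsharp-masking step. The window size, trim amount, and sharpening gain are assumptions, not the paper's settings.

      # Alpha-trimmed mean filter: within each window, drop the d lowest and d
      # highest values and average the rest (window size and trim amount assumed).
      import numpy as np

      def alpha_trimmed_mean(img, size=3, d=1):
          pad = size // 2
          padded = np.pad(img.astype(float), pad, mode="reflect")
          out = np.empty_like(img, dtype=float)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  window = np.sort(padded[i:i + size, j:j + size].ravel())
                  out[i, j] = window[d:window.size - d].mean()
          return out

      img = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(float)
      smoothed = alpha_trimmed_mean(img, size=3, d=1)
      # A simple unsharp-masking step then sharpens using the filtered image:
      sharpened = np.clip(img + 1.5 * (img - smoothed), 0, 255)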

  18. An ATR architecture for algorithm development and testing

    NASA Astrophysics Data System (ADS)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and testing of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive, and the data processing is complex. This puts great demands on the system architecture; it has to run in real time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.

  19. Interdisciplinary research produces results in understanding planetary dunes

    USGS Publications Warehouse

    Titus, Timothy N.; Hayward, Rosalyn K.; Dinwiddie, Cynthia L.

    2012-01-01

    Third International Planetary Dunes Workshop: Remote Sensing and Image Analysis of Planetary Dunes; Flagstaff, Arizona, 12–16 June 2012. This workshop, the third in a biennial series, was convened as a means of bringing together terrestrial and planetary researchers from diverse backgrounds with the goal of fostering collaborative interdisciplinary research. The small-group setting facilitated intensive discussions of many problems associated with aeolian processes on Earth, Mars, Venus, Titan, Triton, and Pluto. The workshop produced a list of key scientific questions about planetary dune fields.

  20. What a Difference a Parameter Makes: a Psychophysical Comparison of Random Dot Motion Algorithms

    PubMed Central

    Pilly, Praveen K.; Seitz, Aaron R.

    2009-01-01

    Random dot motion (RDM) displays have emerged as one of the standard stimulus types employed in psychophysical and physiological studies of motion processing. RDMs are convenient because it is straightforward to manipulate the relative motion energy for a given motion direction in addition to stimulus parameters such as the speed, contrast, duration, density, aperture, etc. However, as widely as RDMs are employed so do they vary in their details of implementation. As a result, it is often difficult to make direct comparisons across studies employing different RDM algorithms and parameters. Here, we systematically measure the ability of human subjects to estimate motion direction for four commonly used RDM algorithms under a range of parameters in order to understand how these different algorithms compare in their perceptibility. We find that parametric and algorithmic differences can produce dramatically different performances. These effects, while surprising, can be understood in relationship to pertinent neurophysiological data regarding spatiotemporal displacement tuning properties of cells in area MT and how the tuning function changes with stimulus contrast and retinal eccentricity. These data help give a baseline by which different RDM algorithms can be compared, demonstrate a need for clearly reporting RDM details in the methods of papers, and also pose new constraints and challenges to models of motion direction processing. PMID:19336240

  1. One-dimensional swarm algorithm packaging

    NASA Astrophysics Data System (ADS)

    Lebedev, Boris K.; Lebedev, Oleg B.; Lebedeva, Ekaterina O.

    2018-05-01

    The paper considers an algorithm for solving the one-dimensional packing problem based on a model of the adaptive behavior of an ant colony. The key role in the development of the ant algorithm is played by the choice of representation (interpretation) of the solution. The structure of the solution search graph, the procedure for finding solutions on the graph, and the methods of deposition and evaporation of pheromone are described. Unlike the canonical paradigm of an ant algorithm, an ant on the solution search graph generates sets of elements distributed across blocks. Experimental studies were conducted on an IBM PC. Compared with existing algorithms, the results are improved.

  2. A new approach to optic disc detection in human retinal images using the firefly algorithm.

    PubMed

    Rahebi, Javad; Hardalaç, Fırat

    2016-03-01

    There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the utilization of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm that was inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific rate of lighting, or fitness. In this method, the insects are compared two by two, and the less attractive insects move toward the more attractive insects. Finally, one of the insects is selected as the most attractive, and this insect presents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels instead of the firefly lightings. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in the retinal images, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results show that the proposed algorithm achieved an accuracy of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset, revealing the high capability and accuracy of the proposed algorithm in detecting the optic disc in retinal images. The average time required to detect the optic disc is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset.

  3. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.

  4. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. Experimental studies using both simulated longitudinal MR brain data and the Alzheimer’s Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399

  5. Directly data processing algorithm for multi-wavelength pyrometer (MWP).

    PubMed

    Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong

    2017-11-27

    Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because of unknown emissivity. The solutions developed so far generally assume a particular mathematical relation for emissivity versus wavelength or emissivity versus temperature. Because of the deviation between such hypotheses and the actual situation, the inversion results can be seriously affected. A direct data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, the Gradient Projection (GP) algorithm and the Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is that the data processing problem of MWP is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. By comparison of simulation results for some typical spectral emissivity models, it is found that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to provide a direct data processing algorithm that overcomes the unknown-emissivity obstacle for MWP.
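
    The penalty-function idea can be illustrated on a small generic problem: the sketch below solves a constrained minimization by adding a logarithmic interior barrier and shrinking the barrier parameter. It demonstrates the IPF mechanism only; the authors' actual MWP objective (radiance residuals with emissivity bounds) is not reproduced.

      # Generic interior (log-barrier) penalty-function sketch: the constrained
      # problem  min (x-2)^2 + (y-1)^2  s.t.  x + y <= 2  is solved by minimizing
      # f(x, y) - mu * log(2 - x - y) for a decreasing barrier parameter mu.
      import numpy as np
      from scipy.optimize import minimize

      def barrier_objective(z, mu):
          x, y = z
          slack = 2.0 - x - y
          if slack <= 0:
              return np.inf                      # outside the feasible interior
          return (x - 2.0) ** 2 + (y - 1.0) ** 2 - mu * np.log(slack)

      z = np.array([0.0, 0.0])                   # strictly feasible start
      for mu in [1.0, 0.1, 0.01, 0.001]:         # shrink the barrier gradually
          z = minimize(barrier_objective, z, args=(mu,), method="Nelder-Mead").x
      print("solution:", np.round(z, 3), "(exact optimum is [1.5, 0.5])")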

  6. Algorithms and data structures for automated change detection and classification of sidescan sonar imagery

    NASA Astrophysics Data System (ADS)

    Gendron, Marlin Lee

    During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) collected by a sidescan sonar with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. To reduce the time required to perform change detection, MIW analysts at the Naval Oceanographic Office are currently using ACDC. The dissertation introductory chapter gives background information on change detection and ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3x and 48.4x faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features. A comparison between the CAS and a great circle distance algorithm shows that the

  7. Magnetic resonance electrical impedance tomography (MREIT): simulation study of J-substitution algorithm.

    PubMed

    Kwon, Ohin; Woo, Eung Je; Yoon, Jeong-Rock; Seo, Jin Keun

    2002-02-01

    We developed a new image reconstruction algorithm for magnetic resonance electrical impedance tomography (MREIT). MREIT is a new EIT imaging technique integrated into a magnetic resonance imaging (MRI) system. Based on the assumption that the internal current density distribution is obtained using an MRI technique, the new image reconstruction algorithm, called the J-substitution algorithm, produces cross-sectional static images of resistivity (or conductivity) distributions. Computer simulations show that the spatial resolution of the resistivity image is comparable to that of MRI. MREIT provides accurate high-resolution cross-sectional resistivity images, making resistivity values of various human tissues available for many biomedical applications.

  8. Pediatric Brain Extraction Using Learning-based Meta-algorithm

    PubMed Central

    Shi, Feng; Wang, Li; Dai, Yakang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2012-01-01

    Magnetic resonance imaging of the pediatric brain provides valuable information for early brain development studies. Automated brain extraction is challenging due to the small brain size and the dynamic change of tissue contrast in the developing brain. In this paper, we propose a novel Learning Algorithm for Brain Extraction and Labeling (LABEL), designed specifically for pediatric MR brain images. The idea is to perform multiple complementary brain extractions on a given testing image by using a meta-algorithm, including BET and BSE, where the parameters of each run of the meta-algorithm are effectively learned from the training data. Also, representative subjects are selected as exemplars and used to guide brain extraction of new subjects in different age groups. We further develop a level-set based fusion method to combine multiple brain extractions together with a closed smooth surface for obtaining the final extraction. The proposed method has been extensively evaluated on subjects of three representative age groups: neonate (less than 2 months), infant (1–2 years), and child (5–18 years). Experimental results show that, with 45 subjects for training (15 neonates, 15 infants, and 15 children), the proposed method produces more accurate brain extraction results on 246 testing subjects (75 neonates, 126 infants, and 45 children), i.e., an average Jaccard Index of 0.953, compared to BET (0.918), BSE (0.902), ROBEX (0.901), GCUT (0.856), and other fusion methods such as Majority Voting (0.919) and STAPLE (0.941). Along with largely improved computational efficiency, the proposed method demonstrates its ability to perform automated brain extraction for pediatric MR images over a large age range. PMID:22634859

  9. Evaluation of Various Radar Data Quality Control Algorithms Based on Accumulated Radar Rainfall Statistics

    NASA Technical Reports Server (NTRS)

    Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation from spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.

  10. A novel evaluation of two related and two independent algorithms for eye movement classification during reading.

    PubMed

    Friedman, Lee; Rigas, Ioannis; Abdulin, Evgeny; Komogortsev, Oleg V

    2018-05-15

    Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs, and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
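
    For orientation, the sketch below shows the simplest kind of fixed velocity-threshold (I-VT style) saccade/fixation labeling that algorithms such as the ONH and MNH refine; the threshold, sampling rate, and synthetic gaze trace are assumptions, and the real algorithms add filtering, PSO detection, and noise handling.

      # Bare-bones fixed-threshold (I-VT style) saccade/fixation labeling on a
      # synthetic one-dimensional gaze trace (assumed threshold and sampling rate).
      import numpy as np

      fs = 1000.0                                   # sampling rate, Hz
      t = np.arange(0, 1.0, 1 / fs)
      x = np.where(t < 0.5, 0.0, 5.0)               # one 5-degree step (a "saccade")
      x = x + 0.01 * np.random.default_rng(0).normal(size=t.size)

      velocity = np.abs(np.gradient(x, 1 / fs))     # deg/s
      THRESHOLD = 100.0                             # assumed fixed velocity threshold
      labels = np.where(velocity > THRESHOLD, "saccade", "fixation")

      # Report contiguous events as (label, start time, end time).
      events, start = [], 0
      for i in range(1, len(labels) + 1):
          if i == len(labels) or labels[i] != labels[start]:
              events.append((labels[start], t[start], t[i - 1]))
              start = i
      print(events[:5])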

  11. Implementation and testing of a sensor-netting algorithm for early warning and high confidence C/B threat detection

    NASA Astrophysics Data System (ADS)

    Gruber, Thomas; Grim, Larry; Fauth, Ryan; Tercha, Brian; Powell, Chris; Steinhardt, Kristin

    2011-05-01

    Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information, questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being developed to resolve these conflicts and to report high confidence consensus threat map data products on a common operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort and development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm has produced some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the location of an explosion) from various ISR sensors (e.g., acoustic, infrared cameras, etc.). These input data are preprocessed to assign estimated uncertainty to each incoming piece of data. The data are then sent to a weighted tomography process to obtain a consensus threat map, including estimated threat concentration level uncertainty. The threat map is then tested for consistency and the overall confidence for the map result is estimated. The map and confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons of fused / un-fused data results will be presented. The metrics for judging the sensor-netting algorithm performance are warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate v. reported threat confidence level.

  12. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm, and it has a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it can achieve a smaller steady-state error and a faster convergence speed. Simulation results show that its performance is superior to that of the traditional affine projection algorithm, and in active noise control (ANC) applications the new algorithm obtains very good results.
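
    The baseline update that the variable-step-size variant modifies is the standard affine projection recursion; a sketch with a fixed step size is given below (the filter length, projection order, step size, and regularization are assumed values, and the paper's step-size rule is not reproduced).

      # Basic affine projection adaptive filter with a fixed step size, identifying
      # an unknown FIR system from noisy observations.
      import numpy as np

      rng = np.random.default_rng(0)
      M, P, mu, delta = 16, 4, 0.5, 1e-3      # filter length, projection order, step, regularization
      w_true = rng.normal(size=M)             # unknown system to identify
      w = np.zeros(M)

      x = rng.normal(size=5000)               # input signal
      d = np.convolve(x, w_true)[:x.size] + 0.01 * rng.normal(size=x.size)

      for n in range(M + P, x.size):
          # X holds the P most recent input vectors as columns (M x P).
          X = np.column_stack([x[n - k - M + 1:n - k + 1][::-1] for k in range(P)])
          e = d[n - P + 1:n + 1][::-1] - X.T @ w          # P most recent errors
          w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)

      print("misalignment:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))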

  13. Towards improving the NASA standard soil moisture retrieval algorithm and product

    NASA Astrophysics Data System (ADS)

    Mladenova, I. E.; Jackson, T. J.; Njoku, E. G.; Bindlish, R.; Cosh, M. H.; Chan, S.

    2013-12-01

    Soil moisture mapping using passive microwave remote sensing techniques has proven to be one of the most effective ways of acquiring reliable global soil moisture information on a routine basis. An important step in this direction was made by the launch of the Advanced Microwave Scanning Radiometer (AMSR-E) on NASA's Earth Observing System Aqua satellite. Along with the standard NASA algorithm and operational AMSR-E product, the easy access and availability of the AMSR-E data promoted the development and distribution of alternative retrieval algorithms and products. Several evaluation studies have demonstrated issues with the standard NASA AMSR-E product, such as a dampened temporal response and a limited range of the final retrievals, and noted that the available global passive-based algorithms, even though based on the same electromagnetic principles, produce different results in terms of accuracy and temporal dynamics. Our goal is to identify the theoretical causes that determine the reduced sensitivity of the NASA AMSR-E product and outline ways to improve the operational NASA algorithm, if possible. Properly identifying the underlying reasons that cause the above-mentioned features of the NASA AMSR-E product and the differences between the alternative algorithms requires a careful examination of the theoretical basis of each approach, specifically the simplifying assumptions and parametrization approaches adopted by each algorithm to reduce the dimensionality of unknowns and characterize the observing system. Statistically based error analyses, which are useful and necessary, provide information on the relative accuracy of each product but give very little information on the theoretical causes, knowledge that is essential for algorithm improvement. Thus, we are currently examining the possibility of improving the standard NASA AMSR-E global soil moisture product by conducting a thorough theoretically based review of and inter-comparisons between several well

  14. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  15. Families of Graph Algorithms: SSSP Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-08-28

    Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra’s algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another by the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then we derive the existing and new algorithms by methodically exploring semantic and spatial ordering of work. Our experimental results show that the newly derived algorithms perform better than the existing distributed-memory parallel algorithms, especially at higher scales.
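
    The shared logic that the AGM model isolates is edge relaxation; individual algorithms differ in how they order that work. The sketch below shows one familiar ordering, Dijkstra's priority-queue ordering, on a small invented graph; it is not the AGM formulation or a distributed implementation.

      # The common logic of SSSP algorithms is edge relaxation; what distinguishes
      # Dijkstra, Delta-stepping, etc. is the order in which relaxations are applied.
      # Below, a priority queue imposes Dijkstra's ordering.
      import heapq

      def dijkstra(adj, source):
          """adj: {u: [(v, weight), ...]}; returns shortest distances from source."""
          dist = {u: float("inf") for u in adj}
          dist[source] = 0.0
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist[u]:
                  continue                      # stale entry
              for v, w in adj[u]:
                  if d + w < dist[v]:           # relax edge (u, v)
                      dist[v] = d + w
                      heapq.heappush(heap, (dist[v], v))
          return dist

      adj = {"a": [("b", 2.0), ("c", 5.0)],
             "b": [("c", 1.0), ("d", 4.0)],
             "c": [("d", 1.0)],
             "d": []}
      print(dijkstra(adj, "a"))   # {'a': 0.0, 'b': 2.0, 'c': 3.0, 'd': 4.0}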

  16. A generalized LSTM-like training algorithm for second-order recurrent neural networks

    PubMed Central

    Monner, Derek; Reggia, James A.

    2011-01-01

    The Long Short Term Memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM’s original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the Generalized Long Short-Term Memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks. PMID:21803542

  17. Determining residual reduction algorithm kinematic tracking weights for a sidestep cut via numerical optimization.

    PubMed

    Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I

    2016-12-01

    Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data, and inconsistencies in the GRFs applied to these musculoskeletal models, can prevent simulations from being accurate. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights for each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as the residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of the experimental kinematics, with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for the sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.
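
    As a generic illustration of the PSO search used above, the sketch below optimizes a stand-in objective; the swarm size, coefficients, and objective are assumptions and do not reproduce the musculoskeletal model or the residual reduction algorithm itself.

      # Generic particle swarm optimization on a toy objective, illustrating the kind
      # of search used to tune tracking weights.
      import numpy as np

      rng = np.random.default_rng(0)

      def objective(x):                      # stand-in for the tracking-error cost
          return np.sum((x - 0.7) ** 2)

      n_particles, dim, iters = 30, 5, 200
      w_inertia, c1, c2 = 0.7, 1.5, 1.5

      pos = rng.uniform(0, 1, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_val = np.array([objective(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos += vel
          vals = np.array([objective(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best cost:", objective(gbest), "at", np.round(gbest, 3))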

  18. Initial Results of an MDO Method Evaluation Study

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the SIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date, although the results are not conclusive because of the small initial test set. MDO formulations can be analyzed in terms of their relation to other formulations, optimality conditions, and the sensitivity of solutions to various perturbations. Optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred. It is important to

  19. Evolving land cover classification algorithms for multispectral and multitemporal imagery

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.; Theiler, James P.; Bloch, Jeffrey J.; Harvey, Neal R.; Perkins, Simon J.; Szymanski, John J.; Young, Aaron C.

    2002-01-01

    The Cerro Grande/Los Alamos forest fire devastated over 43,000 acres (17,500 ha) of forested land, and destroyed over 200 structures in the town of Los Alamos and the adjoining Los Alamos National Laboratory. The need to measure the continuing impact of the fire on the local environment has led to the application of a number of remote sensing technologies. During and after the fire, remote-sensing data was acquired from a variety of aircraft- and satellite-based sensors, including Landsat 7 Enhanced Thematic Mapper (ETM+). We now report on the application of a machine learning technique to the automated classification of land cover using multi-spectral and multi-temporal imagery. We apply a hybrid genetic programming/supervised classification technique to evolve automatic feature extraction algorithms. We use a software package we have developed at Los Alamos National Laboratory, called GENIE, to carry out this evolution. We use multispectral imagery from the Landsat 7 ETM+ instrument from before, during, and after the wildfire. Using an existing land cover classification based on a 1992 Landsat 5 TM scene for our training data, we evolve algorithms that distinguish a range of land cover categories, and an algorithm to mask out clouds and cloud shadows. We report preliminary results of combining individual classification results using a K-means clustering approach. The details of our evolved classification are compared to the manually produced land-cover classification.

  20. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of the results in all cases and significantly improving convergence speed. PMID:25165733

  1. Modified artificial bee colony algorithm for reactive power optimization

    NASA Astrophysics Data System (ADS)

    Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani

    2015-05-01

    Bio-inspired algorithms (BIAs) implemented to solve various optimization problems have shown promising results, which is very important in this severely complex real world. The Artificial Bee Colony (ABC) algorithm, a kind of BIA, has demonstrated tremendous results compared to other optimization algorithms. This paper presents a new modified ABC algorithm, referred to as JA-ABC3, with the aim of enhancing convergence speed and avoiding premature convergence. The proposed algorithm has been simulated on ten commonly used benchmark functions. Its performance has also been compared with other existing ABC variants. To justify its robust applicability, the proposed algorithm has been tested on the Reactive Power Optimization problem. The results show that the proposed algorithm has superior performance to other existing ABC variants, e.g. GABC, BABC1, BABC2, BsfABC and IABC, in terms of convergence speed. Furthermore, the proposed algorithm has also demonstrated excellent performance in solving the Reactive Power Optimization problem.
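
    For readers unfamiliar with the baseline that JA-ABC3 modifies, a minimal standard ABC loop (employed, onlooker and scout phases) might look like the sketch below. It is a generic illustration on a box-constrained benchmark, not the JA-ABC3 variant, and all parameter values are assumptions.

      import numpy as np

      def abc_minimize(f, dim, bounds, colony=20, limit=50, iters=200, seed=0):
          """Basic Artificial Bee Colony (not the paper's JA-ABC3 variant)."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          food = rng.uniform(lo, hi, (colony, dim))        # food sources
          cost = np.apply_along_axis(f, 1, food)
          trials = np.zeros(colony, dtype=int)

          def neighbour(i):
              k = rng.choice([j for j in range(colony) if j != i])
              j = rng.integers(dim)
              v = food[i].copy()
              v[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
              return np.clip(v, lo, hi)

          def greedy(i, v):
              c = f(v)
              if c < cost[i]:
                  food[i], cost[i], trials[i] = v, c, 0
              else:
                  trials[i] += 1

          for _ in range(iters):
              for i in range(colony):                      # employed bees
                  greedy(i, neighbour(i))
              fitness = 1.0 / (1.0 + cost - cost.min())    # onlooker selection weights
              probs = fitness / fitness.sum()
              for _ in range(colony):                      # onlooker bees
                  i = int(rng.choice(colony, p=probs))
                  greedy(i, neighbour(i))
              stale = int(np.argmax(trials))               # scout bee replaces a stale source
              if trials[stale] > limit:
                  food[stale] = rng.uniform(lo, hi, dim)
                  cost[stale] = f(food[stale])
                  trials[stale] = 0
          best = int(np.argmin(cost))
          return food[best], cost[best]

      # example: minimize the sphere function in 10 dimensions
      # x_best, f_best = abc_minimize(lambda x: float(np.sum(x ** 2)), dim=10, bounds=(-5.0, 5.0))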

  2. A Methodology for Evaluating Artifacts Produced by a Formal Verification Process

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.; Miner, Paul S.; Person, Suzette

    2011-01-01

    The goal of this study is to produce a methodology for evaluating the claims and arguments employed in, and the evidence produced by, formal verification activities. To illustrate the process, we conduct a full assessment of a representative case study for the Enabling Technology Development and Demonstration (ETDD) program. We assess the model checking and satisfiability solving techniques as applied to a suite of abstract models of fault-tolerant algorithms which were selected to be deployed in Orion, namely the TTEthernet startup services specified and verified in the Symbolic Analysis Laboratory (SAL) by TTTech. To this end, we introduce the Modeling and Verification Evaluation Score (MVES), a metric that is intended to estimate the amount of trust that can be placed on the evidence that is obtained. The results of the evaluation process and the MVES can then be used by non-experts and evaluators in assessing the credibility of the verification results.

  3. Diversity in Detection Algorithms for Atmospheric Rivers: A Community Effort to Understand the Consequences

    NASA Astrophysics Data System (ADS)

    Shields, C. A.; Ullrich, P. A.; Rutz, J. J.; Wehner, M. F.; Ralph, M.; Ruby, L.

    2017-12-01

    Atmospheric rivers (ARs) are long, narrow filamentary structures that transport large amounts of moisture in the lower layers of the atmosphere, typically from subtropical regions to mid-latitudes. ARs play an important role in regional hydroclimate by supplying significant amounts of precipitation that can alleviate drought or, in extreme cases, produce dangerous floods. Accurately detecting, or tracking, ARs is important not only for weather forecasting, but also for understanding how these events may change under global warming. Detection algorithms are applied on both regional and global scales, most accurately with high-resolution datasets or model output. Different detection algorithms can produce different answers. Detection algorithms found in the current literature fall broadly into two categories: "time-stitching", where the AR is tracked with a Lagrangian approach through time and space; and "counting", where ARs are identified at a single point in time for a single location. Counting routines can be further subdivided into algorithms that use absolute thresholds with specific geometry, algorithms that use relative thresholds, algorithms based on statistics, and pattern recognition and machine learning techniques. With such a large diversity in detection code, AR tracks and "counts" can vary widely from technique to technique. Uncertainty increases for future climate scenarios, where the difference between relative and absolute thresholding produces vastly different counts, simply due to the moister background state in a warmer world. In an effort to quantify the uncertainty associated with tracking algorithms, the AR detection community has come together to participate in ARTMIP, the Atmospheric River Tracking Method Intercomparison Project. Each participant will provide AR metrics to the greater group by applying their code to a common reanalysis dataset. MERRA2 data was chosen for both temporal and spatial resolution
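
    As a concrete illustration of the simplest "counting" category, an absolute-threshold detector over a gridded integrated vapour transport (IVT) field might be sketched as below. The 250 kg m-1 s-1 threshold and the crude size/elongation filter are assumptions chosen for illustration, not any ARTMIP participant's code.

      import numpy as np
      from scipy import ndimage

      def count_ar_objects(ivt, threshold=250.0, min_cells=50, min_elongation=2.0):
          """Label contiguous regions of IVT above an absolute threshold and keep
          the long, narrow ones as atmospheric-river candidates."""
          mask = ivt >= threshold
          labels, n = ndimage.label(mask)
          candidates = []
          for region in range(1, n + 1):
              rows, cols = np.nonzero(labels == region)
              if rows.size < min_cells:
                  continue
              extent = max(np.ptp(rows) + 1, np.ptp(cols) + 1)   # longest axis in grid cells
              width = rows.size / extent                         # mean width in grid cells
              if extent / max(width, 1.0) >= min_elongation:
                  candidates.append(region)
          return len(candidates), labels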

  4. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.

  5. Combination of graph heuristics in producing initial solution of curriculum based course timetabling problem

    NASA Astrophysics Data System (ADS)

    Wahid, Juliana; Hussin, Naimah Mohd

    2016-08-01

    The construction of a population of initial solutions is a crucial task in population-based metaheuristic approaches for solving the curriculum-based university course timetabling problem, because it can affect the convergence speed and also the quality of the final solution. This paper presents an exploration of combinations of graph heuristics in the construction approach to the curriculum-based course timetabling problem to produce a population of initial solutions. The graph heuristics were set as single heuristics and as combinations of two heuristics. In addition, several ways of assigning courses to rooms and timeslots are implemented. All heuristic settings are then tested on the same curriculum-based course timetabling problem instances and are compared with each other in terms of the number of initial solutions produced. The results show that the combination of the saturation degree heuristic followed by the largest degree heuristic produces the largest population of initial solutions. The results from this study can be used in the improvement phase of algorithms that use a population of initial solutions.
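
    A minimal sketch of the winning combination (saturation degree, ties broken by largest degree) for building one initial solution could look like the following. The data structures and the omission of room assignment are simplifying assumptions, not the authors' implementation.

      def construct_initial_solution(courses, conflicts, timeslots):
          """Greedy assignment: repeatedly pick the unscheduled course with the highest
          saturation degree (number of distinct timeslots already used by its conflicting
          courses), breaking ties by largest degree, and give it the first feasible slot."""
          assignment = {}
          unscheduled = set(courses)
          while unscheduled:
              def saturation(c):
                  return len({assignment[n] for n in conflicts[c] if n in assignment})
              course = max(unscheduled, key=lambda c: (saturation(c), len(conflicts[c])))
              used = {assignment[n] for n in conflicts[course] if n in assignment}
              feasible = [t for t in timeslots if t not in used]
              if not feasible:
                  return None            # construction failed; caller restarts or repairs
              assignment[course] = feasible[0]
              unscheduled.remove(course)
          return assignment

      # conflicts maps each course to the courses sharing students or lecturers, e.g.
      # construct_initial_solution(['C1', 'C2', 'C3'],
      #                            {'C1': {'C2'}, 'C2': {'C1', 'C3'}, 'C3': {'C2'}},
      #                            [0, 1, 2])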

  6. DNA-based watermarks using the DNA-Crypt algorithm.

    PubMed

    Heider, Dominik; Barnekow, Angelika

    2007-05-29

    The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.

  7. Intelligent Use of CFAR Algorithms

    DTIC Science & Technology

    1993-05-01

    ...the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that... (Interim report, Jan 92 - Sep 92, published May 1993; Kaman Sciences Corporation, P. Antonik et al.; report RL-TR-93-75, AD-A267 755; contract F30602-91-C-0017.)
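
    The masking problem mentioned in the fragment is easiest to see against the baseline cell-averaging CFAR, which can be sketched as follows. This is a generic CA-CFAR illustration, not the report's GCMLD or CMLD detectors; the window sizes and the false-alarm probability are assumptions.

      import numpy as np

      def ca_cfar(power, guard=2, train=8, pfa=1e-3):
          """Cell-averaging CFAR on a 1-D square-law detected power profile.
          A strong interfering target inside the training cells inflates the noise
          estimate and can mask a nearby target -- the problem modified detectors
          such as CMLD/GCMLD are designed to reduce."""
          n = 2 * train
          alpha = n * (pfa ** (-1.0 / n) - 1.0)          # threshold multiplier for the desired Pfa
          hits = np.zeros(power.shape, dtype=bool)
          for i in range(train + guard, len(power) - train - guard):
              lead = power[i - train - guard:i - guard]
              lag = power[i + guard + 1:i + guard + train + 1]
              noise = (lead.sum() + lag.sum()) / n
              hits[i] = power[i] > alpha * noise
          return hits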

  8. Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results.

    DTIC Science & Technology

    1981-06-01

    ...the present discussion assumes that phrases have one or two daughters, or more formally, that the grammar is in Chomsky Normal Form [1]. From a grammar point of view, the report contrasts Chomsky Normal Form [1] with Categorial Grammars [2]; from a representational point of view, the chart-parsing combination rule builds chart(i, j) from chart(i, k) and chart(k, j), applied bottom-up for Chomsky Normal Form and top-down for Categorial Grammars, and the report also considers Earley's Algorithm [8]...
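
    The bottom-up combination rule for Chomsky Normal Form grammars in the fragment above is the core of the classical CKY chart parser; a compact recognizer is sketched below. This is a textbook illustration, not the report's breadth-first parser, and the grammar encoding is an assumption.

      from collections import defaultdict

      def cky_recognize(words, lexical, binary, start='S'):
          """lexical: word -> set of nonterminals A with rule A -> word;
          binary: (B, C) -> set of nonterminals A with rule A -> B C."""
          n = len(words)
          chart = defaultdict(set)                   # chart[(i, j)] = categories spanning words[i:j]
          for i, w in enumerate(words):
              chart[(i, i + 1)] |= lexical.get(w, set())
          for span in range(2, n + 1):
              for i in range(n - span + 1):
                  j = i + span
                  for k in range(i + 1, j):          # chart(i, j) from chart(i, k) and chart(k, j)
                      for B in chart[(i, k)]:
                          for C in chart[(k, j)]:
                              chart[(i, j)] |= binary.get((B, C), set())
          return start in chart[(0, n)]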

  9. Recognition of plant parts with problem-specific algorithms

    NASA Astrophysics Data System (ADS)

    Schwanke, Joerg; Brendel, Thorsten; Jensch, Peter F.; Megnet, Roland

    1994-06-01

    Automatic micropropagation is necessary to produce high amounts of biomass cost-effectively. Juvenile plants are dissected in a clean-room environment at particular points on the stem or the leaves. A vision system detects possible cutting points and controls a specialized robot. This contribution is directed at the pattern-recognition algorithms used to detect structural parts of the plant.

  10. Fireworks algorithm for mean-VaR/CVaR models

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and the optimization speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has advantages over the genetic algorithm in solving the portfolio optimization problem, and that it is feasible and promising to apply it in this field.
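
    The abstract does not restate the risk measures being optimized; for reference, historical VaR and CVaR of a candidate portfolio can be evaluated as in the sketch below. This is a generic illustration of the objective, not the paper's fireworks algorithm, and the 95% confidence level is an assumption.

      import numpy as np

      def var_cvar(returns, weights, alpha=0.95):
          """returns: T x N matrix of historical asset returns; weights: portfolio weights.
          Losses are taken as positive numbers, so VaR is the alpha-quantile of the loss
          distribution and CVaR is the mean loss beyond VaR."""
          losses = -(returns @ np.asarray(weights))
          var = np.quantile(losses, alpha)
          cvar = losses[losses >= var].mean()
          return var, cvar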

  11. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time over all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
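
    As a baseline for the bit-assignment idea, the naive fixed two-bits-per-base packing (which any DNA compressor must beat, and which DNABIT Compress improves on by giving shorter codes to exact and reverse repeats) can be sketched as:

      CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
      BASES = 'ACGT'

      def pack_2bit(seq):
          """Naive 2 bits/base packing of an upper-case DNA string."""
          bits = 0
          for base in seq:
              bits = (bits << 2) | CODE[base]
          nbytes = (2 * len(seq) + 7) // 8
          return len(seq), bits.to_bytes(nbytes, 'big')

      def unpack_2bit(n, data):
          bits = int.from_bytes(data, 'big')
          return ''.join(BASES[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

      # round trip: unpack_2bit(*pack_2bit('ACGTACGT')) == 'ACGTACGT'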

  12. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: dx/dt = F(x(t)), x(0) = p, an element of G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R^N, then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for the algorithm to yield an rth order numerical integrator and to analyze the resulting algorithms.
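
    The "stays on the group" property is easiest to see in a concrete case. The sketch below shows a second-order Munthe-Kaas-style Runge-Kutta step on SO(3); it is in the same spirit as, but not identical to, the construction described in the record, and the choice of SO(3) and of the vector-field interface are assumptions.

      import numpy as np
      from scipy.linalg import expm

      def hat(w):
          """so(3) hat map: a vector in R^3 becomes a skew-symmetric 3x3 matrix."""
          return np.array([[0.0, -w[2], w[1]],
                           [w[2], 0.0, -w[0]],
                           [-w[1], w[0], 0.0]])

      def lie_rk2_step(F, x, h):
          """One midpoint-rule step for dx/dt = hat(F(x)) x with x in SO(3).
          Each update multiplies x by the exponential of a skew-symmetric matrix
          (an exact rotation), so the iterate remains on the group."""
          k1 = F(x)
          k2 = F(expm(hat(0.5 * h * k1)) @ x)
          return expm(hat(h * k2)) @ x

      # example: free rotation at a constant body rate
      # omega = np.array([0.1, 0.2, 0.3]); x = np.eye(3)
      # x = lie_rk2_step(lambda R: omega, x, 0.01)   # x stays orthogonal to machine precision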

  13. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering. Comparison shows that the quantum filtering algorithm obtains a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  14. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
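
    For reference, the algorithm the tool visualizes can be written compactly; the sketch below is the standard priority-queue formulation of Dijkstra's algorithm, not the visualization tool itself.

      import heapq

      def dijkstra(graph, source):
          """graph: dict node -> list of (neighbour, weight) with non-negative weights.
          Returns the shortest-path distance from source to every reachable node."""
          dist = {source: 0}
          heap = [(0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float('inf')):
                  continue                         # stale queue entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float('inf')):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist

      # dijkstra({'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}, 'a') == {'a': 0, 'b': 1, 'c': 3}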

  15. Application of hybrid clustering using parallel k-means algorithm and DIANA algorithm

    NASA Astrophysics Data System (ADS)

    Umam, Khoirul; Bustamam, Alhadi; Lestari, Dian

    2017-03-01

    DNA is one of the carriers of genetic information in living organisms. Encoding, sequencing, and clustering DNA sequences have become key and routine tasks in molecular biology, in particular in bioinformatics applications. There are two types of clustering: hierarchical clustering and partitioning clustering. In this paper, we combined the two types, K-Means (partitioning clustering) and DIANA (hierarchical clustering), into what we call hybrid clustering. Hybrid clustering using a parallel K-Means algorithm and the DIANA algorithm is applied to cluster DNA sequences of Human Papillomavirus (HPV). The clustering process starts by collecting DNA sequences of HPV from NCBI (National Centre for Biotechnology Information) and then extracting characteristics of the DNA sequences. The extracted characteristics are stored in a matrix, which is normalized using min-max normalization; genetic distance is then calculated using the Euclidean distance. Furthermore, the hybrid clustering is applied using an implementation of the parallel K-Means algorithm and the DIANA algorithm. The aim of using hybrid clustering is to obtain better clusters. To validate the resulting clusters and obtain the optimum number of clusters, we use the Davies-Bouldin Index (DBI). In this study, parallel K-Means clustering alone grouped the data into 5 clusters with a minimal DBI value of 0.8741, whereas hybrid clustering grouped the data into 13 sub-clusters with minimal DBI values of 0.8216, 0.6845, 0.3331, 0.1994 and 0.3952. The DBI values of hybrid clustering are lower than the DBI value of parallel K-Means clustering performed at the first stage alone. This means that hybrid clustering gives a better clustering of the HPV DNA sequences than parallel K-Means clustering alone.
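
    A minimal sketch of the validation step (choosing the K-Means clustering with the lowest Davies-Bouldin index, which the paper uses to compare against the hybrid result) could look like the following. scikit-learn is an assumed stand-in, and the DIANA sub-clustering stage is not reproduced here.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import davies_bouldin_score

      def best_kmeans_by_dbi(X, k_values=range(2, 16), seed=0):
          """Fit K-Means for each candidate k and keep the labelling with the lowest
          Davies-Bouldin index (lower means more compact, better separated clusters)."""
          best = None
          for k in k_values:
              labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
              dbi = davies_bouldin_score(X, labels)
              if best is None or dbi < best[0]:
                  best = (dbi, k, labels)
          return best   # (dbi, k, labels)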

  16. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  17. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    NASA Astrophysics Data System (ADS)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which shows interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, resulting in similar spatiotemporal patterns.
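
    The spectral-mixture idea carried over from hyperspectral imaging can be illustrated with a generic linear unmixing step. The sketch below uses non-negative least squares against reference ("endmember") waveforms for lead, sea ice and open ocean; it illustrates the concept rather than the authors' exact algorithm, and the 0.5 cut-off is an assumption.

      import numpy as np
      from scipy.optimize import nnls

      def unmix_waveform(waveform, endmembers):
          """waveform: 1-D radar altimeter waveform (e.g. 128 range bins);
          endmembers: 2-D array whose columns are reference waveforms
          (e.g. lead, sea ice, ocean). Returns normalized abundances."""
          coeffs, _ = nnls(endmembers, waveform)
          total = coeffs.sum()
          return coeffs / total if total > 0 else coeffs

      def is_lead(waveform, endmembers, lead_column=0, min_fraction=0.5):
          """Classify a waveform as a lead if the lead endmember dominates the mixture."""
          return unmix_waveform(waveform, endmembers)[lead_column] >= min_fraction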

  18. Hybrid employment recommendation algorithm based on Spark

    NASA Astrophysics Data System (ADS)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  19. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  20. Evaluation of a Pair-Wise Conflict Detection and Resolution Algorithm in a Multiple Aircraft Scenario

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    2002-01-01

    The KB3D algorithm is a pairwise conflict detection and resolution (CD&R) algorithm. It detects and generates trajectory vectoring for an aircraft which has been predicted to be in an airspace minima violation within a given look-ahead time. It has been proven, using mechanized theorem proving techniques, that for a pair of aircraft, KB3D produces at least one vectoring solution and that all solutions produced are correct. Although solutions produced by the algorithm are mathematically correct, they might not be physically executable by an aircraft or might not solve multiple aircraft conflicts. This paper describes a simple solution selection method which assesses all solutions generated by KB3D and determines the solution to be executed. The solution selection method and KB3D are evaluated using a simulation in which N aircraft fly in a free-flight environment and each aircraft in the simulation uses KB3D to maintain separation. Specifically, the solution selection method filters KB3D solutions which are procedurally undesirable or physically not executable and uses a predetermined criteria for selection.
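
    KB3D itself is formally specified and verified elsewhere; as a rough illustration of what pairwise conflict detection means, a closest-point-of-approach test for the horizontal plane under straight-line motion can be sketched as below. This is a generic detection sketch with assumed units and separation minima, not the KB3D algorithm, and it contains none of the resolution or solution-selection logic discussed in the paper.

      import numpy as np

      def horizontal_conflict(p_own, v_own, p_intr, v_intr, sep_nmi=5.0, lookahead_s=300.0):
          """Positions in nmi, velocities in nmi/s, both assumed constant.
          Returns (conflict?, time of closest approach within the look-ahead window)."""
          dp = np.asarray(p_intr, float) - np.asarray(p_own, float)
          dv = np.asarray(v_intr, float) - np.asarray(v_own, float)
          a = float(dv @ dv)
          if a == 0.0:                                   # no relative motion
              return bool(np.linalg.norm(dp) < sep_nmi), 0.0
          t_cpa = float(np.clip(-(dp @ dv) / a, 0.0, lookahead_s))
          miss = float(np.linalg.norm(dp + t_cpa * dv))
          return miss < sep_nmi, t_cpa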

  1. Average correlation clustering algorithm (ACCA) for grouping of co-regulated genes with similar pattern of variation in their expression values.

    PubMed

    Bhattacharya, Anindya; De, Rajat K

    2010-08-01

    Distance based clustering algorithms can group genes that show similar expression values under multiple experimental conditions. They are unable to identify a group of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA) to tackle this situation, which is based on the concept of correlation clustering. But this algorithm may also fail for certain cases. In order to overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce better clustering solutions than those produced by some other algorithms. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to the time of execution. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods including DCCA to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some others in determining a group of genes having more common transcription factors and with a similar pattern of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on the Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. Two word files (included in the zip file) need to
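
    The core assignment rule described ("all genes in a cluster have the highest average correlation values with the genes in that cluster") can be sketched as an iterative reassignment loop. The following is an illustration in that spirit, with random initialization as an assumption; it is not the authors' released software.

      import numpy as np

      def average_correlation_clustering(expr, k=5, max_iter=100, seed=0):
          """expr: genes x conditions expression matrix. Repeatedly move each gene to the
          cluster whose members have the highest average Pearson correlation with it."""
          rng = np.random.default_rng(seed)
          corr = np.corrcoef(expr)                        # gene-by-gene correlation matrix
          labels = rng.integers(k, size=expr.shape[0])
          for _ in range(max_iter):
              changed = False
              for g in range(expr.shape[0]):
                  scores = []
                  for c in range(k):
                      members = labels == c
                      members[g] = False                  # exclude the gene itself
                      scores.append(corr[g, members].mean() if members.any() else -np.inf)
                  best = int(np.argmax(scores))
                  if best != labels[g]:
                      labels[g] = best
                      changed = True
              if not changed:
                  break
          return labels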

  2. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    PubMed

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  3. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal control based algorithm. Also, the study included the effects of transport delays and the compensation thereof. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports on the analysis of the experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests which will be conducted during the spring of 2003. Therefore only three pilots were used. Nevertheless some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers the two motion conditions were combined with four delay conditions (0, 50, 100 and 200 ms), with and without compensation.

  4. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, including the laser emission and the pulse laser ranging algorithm. An improved ranging algorithm is developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The hardware system includes a laser diode, a laser detector and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm combining the matched filter and CFD algorithms is implemented in an FPGA chip. Finally, a laser ranging experiment is carried out with the hardware system to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the hardware system achieves high speed processing and high speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, which meets the expected performance and is consistent with the theoretical simulation.
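
    A software sketch of the matched-filter-plus-CFD combination might look like the following. It uses a simplified digital "fraction of peak" discriminator rather than the classical delay-and-subtract CFD, and it is an illustration only, not the FPGA implementation.

      import numpy as np

      def pulse_range(rx, template, fs, fraction=0.5, c=3.0e8):
          """rx: sampled return signal; template: sampled copy of the emitted pulse;
          fs: sample rate in Hz. Correlate, then time the first crossing of a constant
          fraction of the correlation peak, with linear interpolation between samples."""
          mf = np.correlate(rx, template, mode='full')[len(template) - 1:]
          threshold = fraction * mf.max()
          idx = int(np.argmax(mf >= threshold))          # first sample at/above threshold
          if idx > 0 and mf[idx] != mf[idx - 1]:
              frac = (threshold - mf[idx - 1]) / (mf[idx] - mf[idx - 1])
              t = (idx - 1 + frac) / fs
          else:
              t = idx / fs
          return 0.5 * c * t                             # two-way time of flight to range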

  5. Honey Bees Inspired Optimization Method: The Bees Algorithm.

    PubMed

    Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo

    2013-11-06

    Optimization algorithms are search methods where the goal is to find an optimal solution to a problem, in order to satisfy one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms, the collective behavior is usually very complex. The collective behavior of a swarm of social organisms emerges from the behaviors of the individuals of that swarm. Researchers have developed computational optimization methods based on biology such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the optimal solution. The algorithm combines an exploitative neighborhood search with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and are implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers some advantages over other optimization methods, depending on the nature of the problem.

  6. A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.

    PubMed

    Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan

    2015-01-01

    In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, which is a channel load forecasting algorithm based on a recursive Kalman filter and a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, which is a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City is employed. By comparing the forecasts with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network.
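
    The forecasting component can be illustrated with a scalar Kalman filter producing one-step-ahead channel-load predictions. The random-walk state model and the noise variances below are assumptions standing in for the paper's regression-based KF-BCLF.

      import numpy as np

      def kalman_one_step_forecasts(loads, q=1e-3, r=1e-2):
          """loads: measured channel-load series in [0, 1]. Returns the one-step-ahead
          prediction made before each new measurement arrives."""
          x, p = float(loads[0]), 1.0                    # state estimate and its variance
          forecasts = []
          for z in loads[1:]:
              p += q                                     # predict (random-walk model)
              forecasts.append(x)                        # forecast for this step
              k = p / (p + r)                            # Kalman gain
              x += k * (z - x)                           # update with the measurement
              p *= (1.0 - k)
          return np.array(forecasts)

      # a power controller could then lower beacon power whenever the forecast exceeds
      # a congestion threshold, e.g. a 0.6 channel busy ratio (an assumed value)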

  7. A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs

    PubMed Central

    Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan

    2015-01-01

    In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, which is a channel load forecasting algorithm based on a recursive Kalman filter and a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, which is a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City is employed. By comparing the forecasts with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042

  8. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.

  9. Superiorized algorithm for reconstruction of CT images from sparse-view and limited-angle polyenergetic data

    NASA Astrophysics Data System (ADS)

    Humphries, T.; Winn, J.; Faridani, A.

    2017-08-01

    Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
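
    A minimal scaffold for the superiorization idea (interleaving objective-reducing perturbations with the iterations of an existing reconstruction operator) is sketched below with an anisotropic TV objective. It is a heuristic illustration of the framework, not the authors' polyenergetic algorithm, and the step-size schedule and perturbation count are assumptions.

      import numpy as np

      def tv(img):
          """Anisotropic total variation of a 2-D image."""
          return np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()

      def tv_subgradient(img):
          g = np.zeros_like(img)
          sx = np.sign(np.diff(img, axis=1))
          sy = np.sign(np.diff(img, axis=0))
          g[:, 1:] += sx
          g[:, :-1] -= sx
          g[1:, :] += sy
          g[:-1, :] -= sy
          return g

      def superiorize(x, basic_step, sweeps=50, n_pert=5, gamma=1.0, a=0.995):
          """basic_step: one iteration of the underlying reconstruction algorithm
          (e.g. an ART/SIRT sweep). Between sweeps, take small TV-nonincreasing steps
          along the negative TV subgradient with a summable step-size sequence."""
          ell = 0
          for _ in range(sweeps):
              for _ in range(n_pert):
                  d = -tv_subgradient(x)
                  norm = np.linalg.norm(d)
                  if norm > 0.0:
                      d /= norm
                  candidate = x + gamma * a ** ell * d
                  ell += 1
                  if tv(candidate) <= tv(x):             # keep only nonascending moves
                      x = candidate
              x = basic_step(x)
          return x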

  10. On Super-Resolution and the MUSIC Algorithm,

    DTIC Science & Technology

    1985-05-01

    Summary (G. D. de Villiers, May 1985): Simulation results for phased array signal processing using the MUSIC algorithm are presented. The model used is more realistic than previous ones and it gives an indication as to how the algorithm would perform... At present there is a considerable amount of interest in "high-resolution"...
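
    For readers who want to reproduce the flavour of such simulations, the MUSIC pseudospectrum for a uniform linear array can be computed as in the sketch below. This is the textbook formulation with an assumed half-wavelength element spacing, not the report's array model.

      import numpy as np

      def music_pseudospectrum(X, n_sources, scan_angles_rad, spacing_wavelengths=0.5):
          """X: complex snapshot matrix, shape (n_elements, n_snapshots).
          Peaks of the returned pseudospectrum indicate source directions."""
          m = X.shape[0]
          R = X @ X.conj().T / X.shape[1]                      # sample covariance
          _, eigvecs = np.linalg.eigh(R)                       # eigenvalues in ascending order
          noise_subspace = eigvecs[:, : m - n_sources]
          spectrum = np.empty(len(scan_angles_rad))
          for i, theta in enumerate(scan_angles_rad):
              steering = np.exp(-2j * np.pi * spacing_wavelengths *
                                np.arange(m) * np.sin(theta))
              proj = noise_subspace.conj().T @ steering
              spectrum[i] = 1.0 / np.real(proj.conj() @ proj)
          return spectrum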

  11. Daylighting simulation: methods, algorithms, and resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, William L.

    This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and components properties, representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For

  12. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    NASA Astrophysics Data System (ADS)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology by building a variable precision concept lattice for each subsystem. A distributed generation algorithm for the variable precision concept lattice based on an ontology of heterogeneous databases is then proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous databases as the standard, a case study is carried out to verify the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis results show that the algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.

  13. A comparison of independent component analysis algorithms and measures to discriminate between EEG and artifact components.

    PubMed

    Dharmaprani, Dhani; Nguyen, Hoang K; Lewis, Trent W; DeLosAngeles, Dylan; Willoughby, John O; Pope, Kenneth J

    2016-08-01

    Independent Component Analysis (ICA) is a powerful statistical tool capable of separating multivariate scalp electrical signals into their additive independent or source components, specifically EEG or electroencephalogram and artifacts. Although ICA is a widely accepted EEG signal processing technique, classification of the recovered independent components (ICs) is still flawed, as current practice still requires subjective human decisions. Here we build on the results from Fitzgibbon et al. [1] to compare three measures and three ICA algorithms. Using EEG data acquired during neuromuscular paralysis, we tested the ability of the measures (spectral slope, peripherality and spatial smoothness) and algorithms (FastICA, Infomax and JADE) to identify components containing EMG. Spatial smoothness showed differentiation between paralysis and pre-paralysis ICs comparable to spectral slope, whereas peripherality showed less differentiation. A combination of the measures showed better differentiation than any measure alone. Furthermore, FastICA provided the best discrimination between muscle-free and muscle-contaminated recordings in the shortest time, suggesting it may be the most suited to EEG applications of the considered algorithms. Spatial smoothness results suggest that a significant number of ICs are mixed, i.e. contain signals from more than one biological source, and so the development of an ICA algorithm that is optimised to produce ICs that are easily classifiable is warranted.
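
    A minimal sketch of the kind of pipeline compared in the study (unmixing with FastICA, then scoring each component with a spectral-slope measure) is given below. scikit-learn's FastICA, the sampling rate and the frequency band are assumptions, and the measure is only loosely modelled on the one described in the paper.

      import numpy as np
      from sklearn.decomposition import FastICA

      def spectral_slope(component, fs=256.0, band=(1.0, 100.0)):
          """Slope of log power versus log frequency for one independent component;
          EEG-dominated components typically show a steeper (more negative) slope
          than broadband, EMG-contaminated ones."""
          freqs = np.fft.rfftfreq(component.size, 1.0 / fs)
          power = np.abs(np.fft.rfft(component)) ** 2
          keep = (freqs >= band[0]) & (freqs <= band[1])
          slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
          return slope

      # eeg: array of shape (n_samples, n_channels)
      # ica = FastICA(n_components=20, random_state=0, max_iter=1000)
      # sources = ica.fit_transform(eeg)                 # (n_samples, n_components)
      # slopes = [spectral_slope(sources[:, i]) for i in range(sources.shape[1])]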

  14. Survey of PRT Vehicle Management Algorithms

    DOT National Transportation Integrated Search

    1974-01-01

    The document summarizes the results of a literature survey of state of the art vehicle management algorithms applicable to Personal Rapid Transit Systems(PRT). The surveyed vehicle management algorithms are organized into a set of five major componen...

  15. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  16. A joint equalization algorithm in high speed communication systems

    NASA Astrophysics Data System (ADS)

    Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin

    2018-02-01

    This paper presents a joint equalization algorithm for high speed communication systems. The algorithm takes advantage of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage uses the CMA algorithm, which is not sensitive to frequency offset; it is located before the carrier recovery loop so that the loop performs better and most of the frequency offset is overcome. The post-equalization stage uses the MMA algorithm to overcome the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms, and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results, shown as constellation diagrams and a bit error rate curve, indicate that the proposed joint equalization algorithm is better than the traditional algorithms; the residual frequency offset is visible directly in the constellation diagrams. At an SNR of 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than with the CMA algorithm, 77 times better than with MMA equalization, and 9 times better than with CMA-MMA equalization.
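
    The pre-equalization stage relies on the blind constant modulus criterion. A baseband sketch of a CMA-adapted FIR equalizer is given below; it is a generic stochastic-gradient CMA with assumed tap count and step size, not the paper's joint CMA/MMA scheme.

      import numpy as np

      def cma_equalize(x, n_taps=11, mu=1e-3, r2=1.0):
          """x: complex received samples. Adapts FIR taps w to drive |y|^2 toward r2;
          being blind, CMA tolerates carrier frequency offset, which is why it can sit
          before the carrier recovery loop."""
          w = np.zeros(n_taps, dtype=complex)
          w[n_taps // 2] = 1.0                           # centre-spike initialization
          y = np.zeros(len(x), dtype=complex)
          for n in range(n_taps, len(x)):
              u = x[n - n_taps:n][::-1]                  # regressor, most recent sample first
              y[n] = np.vdot(w, u)                       # y = w^H u
              err = (np.abs(y[n]) ** 2 - r2) * y[n]      # CMA error term
              w -= mu * np.conj(err) * u                 # stochastic gradient step
          return y, w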

  17. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  18. Impact of Gram stain results on initial treatment selection in patients with ventilator-associated pneumonia: a retrospective analysis of two treatment algorithms.

    PubMed

    Yoshimura, Jumpei; Kinoshita, Takahiro; Yamakawa, Kazuma; Matsushima, Asako; Nakamoto, Naoki; Hamasaki, Toshimitsu; Fujimi, Satoshi

    2017-06-19

    Ventilator-associated pneumonia (VAP) is a common and serious problem in intensive care units (ICUs). Several studies have suggested that the Gram stain of endotracheal aspirates is a useful method for accurately diagnosing VAP. However, the usefulness of the Gram stain in predicting which microorganisms cause VAP has not been established. The purpose of this study was to evaluate whether a Gram stain of endotracheal aspirates could be used to determine appropriate initial antimicrobial therapy for VAP. Data on consecutive episodes of microbiologically confirmed VAP were collected from February 2013 to February 2016 in the ICU of a tertiary care hospital in Japan. We constructed two hypothetical empirical antimicrobial treatment algorithms for VAP: a guidelines-based algorithm (GLBA) based on the recommendations of the American Thoracic Society-Infectious Diseases Society of America (ATS-IDSA) guidelines and a Gram stain-based algorithm (GSBA) which limited the choice of initial antimicrobials according to the results of bedside Gram stains. The GLBA and the GSBA were retrospectively reviewed for each VAP episode. The initial coverage rates and the selection of broad-spectrum antimicrobial agents were compared between the two algorithms. During the study period, 219 suspected VAP episodes were observed and 131 episodes were assessed for analysis. Appropriate antimicrobial coverage rates were not significantly different between the two algorithms (GLBA 95.4% versus GSBA 92.4%; p = 0.134). The number of episodes for which antimethicillin-resistant Staphylococcus aureus agents were selected as an initial treatment was larger in the GLBA than in the GSBA (71.0% versus 31.3%; p < 0.001), as were the number of episodes for which antipseudomonal agents were recommended as an initial treatment (70.2% versus 51.9%; p < 0.001). Antimicrobial treatment based on Gram stain results may restrict the administration of broad-spectrum antimicrobial agents without

  19. A Heuristics Approach for Classroom Scheduling Using Genetic Algorithm Technique

    NASA Astrophysics Data System (ADS)

    Ahmad, Izah R.; Sufahani, Suliadi; Ali, Maselan; Razali, Siti N. A. M.

    2018-04-01

    Reshuffling and arranging classrooms based on audience capacity, available facilities, lecturing time and other constraints can make classroom scheduling highly complex. To enhance productivity in classroom planning, this paper proposes a heuristic approach to timetabling optimization. A new algorithm was produced to handle the timetabling problem in a university. The proposed heuristic approach leads to better utilization of the available classroom space for a given timetable of courses at the university. A genetic algorithm implemented in the Java programming language was used in this study, with the aim of reducing conflicts and optimizing the fitness. The algorithm considers the number of students in each class, class time, class size, the time availability of each room and the lecturer in charge of each class.

  20. Fast algorithm for bilinear transforms in optics

    NASA Astrophysics Data System (ADS)

    Ostrovsky, Andrey S.; Martinez-Niconoff, Gabriel C.; Ramos Romero, Obdulio; Cortes, Liliana

    2000-10-01

    A fast algorithm for calculating the bilinear transform in an optical system is proposed. The algorithm is based on the coherent-mode representation of the cross-spectral density function of the illumination and is computationally efficient when the illumination is partially coherent. Numerical examples are studied and compared with the theoretical results.