Science.gov

Sample records for algorithm experiments show

  1. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. The first two are classical Casimir force measurements between two metal half-spaces: the torsion pendulum experiment by Lamoreaux and the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; in both cases theory predicts a large negative thermal correction that is absent in the high-precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser-irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall, in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a ^{87}Rb Bose-Einstein condensate trapped close to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  2. SAGE II inversion algorithm [Stratospheric Aerosol and Gas Experiment]

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  3. Firmware algorithms for PINGU experiment

    NASA Astrophysics Data System (ADS)

    Pankova, Daria; Anderson, Tyler; IceCube Collaboration

    2017-01-01

    PINGU is a future low-energy extension of the IceCube experiment. It will be implemented as several additional, more closely spaced strings of digital optical modules (DOMs) inside the main detector volume. PINGU would be able to register neutrinos with energies as low as a few GeV. One of the proposed designs for the new PINGU DOMs is an updated version of the IceCube DOMs with newer electronic components, particularly a better, more modern FPGA. With those improvements it is desirable to run some waveform feature extraction directly on the DOM, thus decreasing the amount of data sent over the detector's bandwidth-limited cable. In order to use the existing feature extraction package for this purpose, the signal waveform needs to be prepared by subtracting a variable baseline from it. The baseline shape depends mostly on the environment temperature, which causes a long-term drift of the signal, and on the induction used in the signal readout electronics, which modifies the signal shape. Algorithms have been selected to counter those baseline variances, modeled, and partly implemented in FPGA fabric. The simulation shows good agreement between the initial signal and the ``corrected'' version.
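
    The abstract does not spell out the on-DOM baseline algorithm, so the sketch below only illustrates the general idea: estimate a slowly varying baseline and subtract it before feature extraction. The moving-average estimator, window size, and toy waveform are assumptions for illustration, not the collaboration's firmware logic (which must also fit in FPGA fabric):

    ```python
    import numpy as np

    def subtract_baseline(waveform, window=501):
        """Estimate a slowly varying baseline with a running mean and
        subtract it, leaving the fast pulse for feature extraction."""
        pad = window // 2
        padded = np.pad(waveform, pad, mode="edge")       # handle the ends
        kernel = np.ones(window) / window
        baseline = np.convolve(padded, kernel, mode="valid")
        return waveform - baseline, baseline

    # Toy record: a fast pulse riding on a slow, temperature-like drift.
    t = np.arange(4096)
    drift = 50.0 * np.sin(2 * np.pi * t / 8192)               # slow wander
    pulse = 200.0 * np.exp(-0.5 * ((t - 2000) / 15.0) ** 2)   # fast signal
    record = drift + pulse + np.random.normal(0.0, 2.0, t.size)

    corrected, baseline = subtract_baseline(record)
    # The pulse should survive nearly intact; a production design would also
    # gate pulses out of the baseline estimate so they are not absorbed.
    print(round(float(corrected[2000]), 1), "vs true amplitude 200.0")
    ```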

  4. Experiments showing dynamics of materials interfaces

    SciTech Connect

    Benjamin, R.F.

    1997-02-01

    The discipline of materials science and engineering often involves understanding and controlling properties of interfaces. The authors address the challenge of educating students about properties of interfaces, particularly dynamic properties and effects of unstable interfaces. A series of simple, inexpensive, hands-on activities about fluid interfaces provides students with a testbed to develop intuition about interface dynamics. The experiments highlight the essential role of initial interfacial perturbations in determining the dynamic response of the interface. The experiments produce dramatic, unexpected effects when initial perturbations are controlled and inhibited. These activities help students to develop insight about unstable interfaces that can be applied to analogous problems in materials science and engineering. The lessons examine ``Rayleigh-Taylor instability,'' an interfacial instability that occurs when a higher-density fluid is above a lower-density fluid.

  5. Retrieval Algorithms for the Halogen Occultation Experiment

    NASA Technical Reports Server (NTRS)

    Thompson, Robert E.; Gordley, Larry L.

    2009-01-01

    The Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) provided high quality measurements of key middle atmosphere constituents, aerosol characteristics, and temperature for 14 years (1991-2005). This report is an outline of the Level 2 retrieval algorithms, and it also describes the great care that was taken in characterizing the instrument prior to launch and throughout its mission life. It represents an historical record of the techniques used to analyze the data and of the steps that must be considered for the development of a similar experiment for future satellite missions.

  6. Algorithmic Animation in Education--Review of Academic Experience

    ERIC Educational Resources Information Center

    Esponda-Arguero, Margarita

    2008-01-01

    This article is a review of the pedagogical experience obtained with systems for algorithmic animation. Algorithms consist of a sequence of operations whose effect on data structures can be visualized using a computer. Students learn algorithms by stepping the animation through the different individual operations, possibly reversing their effect.…

  7. IDEAL: Images Across Domains, Experiments, Algorithms and Learning

    NASA Astrophysics Data System (ADS)

    Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao

    2016-11-01

    Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we overview work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis on pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existing materials and discovery of new compounds.

  8. Experiments on Supervised Learning Algorithms for Text Categorization

    NASA Technical Reports Server (NTRS)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

    Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and data acquisitions (ACQ), WebKB, and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
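
    As a latter-day illustration of the paper's comparison (the 2005 study predates scikit-learn, and its exact preprocessing is not given here), one can contrast pairwise-voting SVM with a PLS regression on one-hot class indicators. The dataset choice, feature count, and number of PLS components below are arbitrary assumptions:

    ```python
    import numpy as np
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.multiclass import OneVsOneClassifier
    from sklearn.cross_decomposition import PLSRegression

    cats = ["sci.space", "rec.autos", "talk.politics.misc"]
    train = fetch_20newsgroups(subset="train", categories=cats)
    test = fetch_20newsgroups(subset="test", categories=cats)

    vec = TfidfVectorizer(max_features=5000)
    Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)

    # SVM for multi-class: pairwise classifiers combined by voting.
    svm = OneVsOneClassifier(LinearSVC()).fit(Xtr, train.target)
    print("SVM accuracy:", svm.score(Xte, test.target))

    # PLS: regress one-hot class indicators on the features; the predicted
    # class is the argmax, so no voting scheme is needed.
    Ytr = np.eye(len(cats))[train.target]
    pls = PLSRegression(n_components=40).fit(Xtr.toarray(), Ytr)
    pred = pls.predict(Xte.toarray()).argmax(axis=1)
    print("PLS accuracy:", float((pred == test.target).mean()))
    ```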

  9. Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw

    2000-01-01

    Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.
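
    The division the abstract describes, concurrent fitness analyses versus serial GA bookkeeping, can be sketched with a process pool. The toy fitness function, Gaussian mutation parameters, and population sizes are illustrative stand-ins for the paper's structural analysis problem:

    ```python
    import numpy as np
    from multiprocessing import Pool

    def fitness(design):
        # Stand-in for an expensive structural analysis; each evaluation is
        # independent, so the population can be scored concurrently.
        return float(np.sum(design ** 2))

    def gaussian_mutate(parent, sigma=0.1):
        # Real-coded reproduction: Gaussian perturbation instead of the
        # bit-exchange operator of a classical GA.
        return parent + np.random.normal(0.0, sigma, parent.shape)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        population = [rng.normal(size=8) for _ in range(32)]
        with Pool(processes=4) as pool:
            for generation in range(50):
                scores = pool.map(fitness, population)   # distributable part
                order = np.argsort(scores)               # serial bookkeeping
                elite = [population[i] for i in order[:8]]
                population = elite + [gaussian_mutate(p) for p in elite * 3]
            print("best fitness:", min(pool.map(fitness, population)))
    ```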

  10. Development of clustering algorithms for Compressed Baryonic Matter experiment

    NASA Astrophysics Data System (ADS)

    Kozlov, G. E.; Ivanov, V. V.; Lebedev, A. A.; Vassiliev, Yu. O.

    2015-05-01

    A clustering problem for the coordinate detectors in the Compressed Baryonic Matter (CBM) experiment is discussed. Because of the high interaction rate and huge datasets to be dealt with, clustering algorithms are required to be fast and efficient and capable of processing events with high track multiplicity. At present there are two different approaches to the problem. In the first one each fired pad bears information about its charge, while in the second one a pad can or cannot be fired, thus rendering the separation of overlapping clusters a difficult task. To deal with the latter, two different clustering algorithms were developed, integrated into the CBMROOT software environment, and tested with various types of simulated events. Both of them are found to be highly efficient and accurate.
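
    A minimal sketch of the two pad-readout scenarios, assuming simple connected-component clustering (the actual CBM algorithms live in CBMROOT and are more elaborate): with per-pad charge a charge-weighted centroid is available, while with binary pads only an unweighted centroid is, which is why overlapping clusters are harder to separate:

    ```python
    import numpy as np
    from scipy.ndimage import label, center_of_mass

    # Toy pad plane with two nearby charge deposits.
    charge = np.zeros((8, 8))
    charge[2, 2:4] = [5.0, 3.0]
    charge[5:7, 5] = [2.0, 4.0]

    # Group adjacent fired pads into clusters.
    clusters, n = label(charge > 0)

    # Scenario 1: each fired pad carries its charge -> weighted centroids.
    weighted = center_of_mass(charge, clusters, range(1, n + 1))

    # Scenario 2: binary pads (fired or not) -> unweighted centroids only,
    # which makes overlapping clusters much harder to separate.
    binary = center_of_mass((charge > 0).astype(float), clusters, range(1, n + 1))

    print(n, "clusters")
    print("charge-weighted:", weighted)
    print("binary:", binary)
    ```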

  11. Data inversion algorithm development for the halogen occultation experiment

    NASA Technical Reports Server (NTRS)

    Gordley, Larry L.; Mlynczak, Martin G.

    1986-01-01

    The successful retrieval of atmospheric parameters from radiometric measurement requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore a considerable amount of time was spent in instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time-constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.

  12. A numerical algorithm for endochronic plasticity and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Valanis, K. C.; Fan, J.

    1985-01-01

    A numerical algorithm based on the finite element method of analysis of the boundary value problem in a continuum is presented, in the case where the plastic response of the material is given in the context of endochronic plasticity. The relevant constitutive equation is expressed in incremental form and plastic effects are accounted for by the method of an induced pseudo-force in the matrix equations. The results of the analysis are compared with observed values in the case of a plate with two symmetric notches and loaded longitudinally in its own plane. The agreement between theory and experiment is excellent.

  13. Experience with CANDID: Comparison algorithm for navigating digital image databases

    SciTech Connect

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
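
    CANDID's actual signatures are probability density functions fitted to texture/shape/color feature vectors; the gray-level histogram and normalized inner-product similarity below are simplified stand-ins meant only to illustrate the query-by-example flow:

    ```python
    import numpy as np

    def signature(image, bins=32):
        """Global gray-level histogram standing in for CANDID's richer
        texture/shape/color signatures."""
        hist, _ = np.histogram(image, bins=bins, range=(0, 256), density=True)
        return hist

    def similarity(sig_a, sig_b):
        # Normalized inner product: 1.0 for identical signatures.
        return float(np.dot(sig_a, sig_b) /
                     (np.linalg.norm(sig_a) * np.linalg.norm(sig_b) + 1e-12))

    # Query-by-example: rank database images against an example image.
    rng = np.random.default_rng(1)
    database = {f"img{i}": rng.normal(40 * i + 40, 20, (64, 64)).clip(0, 255)
                for i in range(5)}
    query = database["img3"] + rng.normal(0, 5, (64, 64))   # noisy copy of img3

    ranked = sorted(database,
                    key=lambda k: -similarity(signature(query),
                                              signature(database[k])))
    print(ranked)   # img3 should rank first
    ```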

  14. Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling

    NASA Technical Reports Server (NTRS)

    Brown, Matthew; Johnston, Mark D.

    2013-01-01

    Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.

  15. Pile-Up Discrimination Algorithms for the HOLMES Experiment

    NASA Astrophysics Data System (ADS)

    Ferri, E.; Alpert, B.; Bennett, D.; Faverzani, M.; Fowler, J.; Giachero, A.; Hays-Wehle, J.; Maino, M.; Nucciotti, A.; Puiu, A.; Ullom, J.

    2016-07-01

    The HOLMES experiment is a new large-scale experiment for the electron neutrino mass determination by means of the electron capture decay of ^{163}Ho. In such an experiment, random coincidence events are one of the main sources of background which impair the ability to identify the effect of a non-vanishing neutrino mass. In order to resolve these spurious events, detectors characterized by a fast response are needed, as well as pile-up recognition algorithms. For that reason, we have developed a code for testing the discrimination efficiency of various algorithms in recognizing pile-up events as a function of the time separation between two pulses. The tests are performed on simulated realistic TES signals and noise. The pulse profile is obtained by solving the two coupled differential equations which describe the response of the TES according to the Irwin-Hilton model. To these pulses, a noise waveform which takes into account all the noise sources regularly present in a real TES is added. The amplitude of the generated pulses is distributed as the ^{163}Ho calorimetric spectrum. Furthermore, the rise time of these pulses has been chosen taking into account the constraints given by both the bandwidth of the microwave multiplexing readout with flux-ramp demodulation and the bandwidth of the ADC boards currently available for ROACH2. Among the different rejection techniques evaluated, the Wiener filter technique, a digital filter to gain time resolution, has shown an excellent pile-up rejection efficiency. The obtained time resolution closely matches the baseline specifications of the HOLMES experiment. We report here a description of our simulation code and a comparison of the different rejection techniques.
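
    The Wiener-filter idea can be sketched as follows: build the filter in the frequency domain from a pulse template and a noise power spectrum, then look for resolved peaks in the filtered record. The two-exponential pulse shape, flat noise spectrum, and thresholds are simplifications of the paper's Irwin-Hilton simulation:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    n = 2048
    t = np.arange(n)

    def tes_pulse(t0, rise=5.0, fall=80.0):
        # Simplified two-exponential stand-in for the Irwin-Hilton response.
        s = (np.exp(-(t - t0) / fall) - np.exp(-(t - t0) / rise)) * (t >= t0)
        return s / s.max()

    template = tes_pulse(0)
    noise_psd = 1e-3                 # flat (white) noise spectrum, a toy choice

    # Wiener filter built from the pulse template and the noise spectrum.
    S = np.fft.rfft(template)
    H = np.conj(S) / (np.abs(S) ** 2 + noise_psd)

    # Piled-up record: two pulses 60 samples apart plus white noise.
    record = tes_pulse(500) + 0.7 * tes_pulse(560)
    record = record + np.random.normal(0.0, 0.03, n)

    filtered = np.fft.irfft(H * np.fft.rfft(record), n)
    peaks, _ = find_peaks(filtered, height=0.5 * filtered.max(), distance=20)
    print("pulse arrival estimates:", peaks)   # ideally near 500 and 560
    ```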

  16. Searching the UVSP database and a list of experiments showing mass motions

    NASA Technical Reports Server (NTRS)

    Thompson, William

    1986-01-01

    Since the Solar Maximum Mission (SMM) satellite was launched, a large database has been built up of experiments using the Ultraviolet Spectrometer and Polarimeter (UVSP) instrument. Access to this database can be gained through the SMM Vax 750 computer at Goddard Space Flight Center. One useful way to do this is with a program called USEARCH. This program allows one to make a listing of different types of UVSP experiments. It is evident that this program is useful to those who would wish to make use of UVSP data, but who don't know what data is available. Therefore it was decided to include a short description of how to make use of the USEARCH program. Also described, but not included, is a listing of all UVSP experiments showing mass motions in prominences and filaments. This list was made with the aid of the USEARCH program.

  17. Contactless experiments on individual DNA molecules show no evidence for molecular wire behavior

    PubMed Central

    Gómez-Navarro, C.; Moreno-Herrero, F.; de Pablo, P. J.; Colchero, J.; Gómez-Herrero, J.; Baró, A. M.

    2002-01-01

    A fundamental requirement for a molecule to be considered a molecular wire (MW) is the ability to transport electrical charge with a reasonably low resistance. We have carried out two experiments that measure first, the charge transfer from an electrode to the molecule, and second, the dielectric response of the MW. The latter experiment requires no contacts to either end of the molecule. From our experiments we conclude that adsorbed individual DNA molecules have a resistivity similar to mica, glass, and silicon oxide substrates. Therefore adsorbed DNA is not a conductor, and it should not be considered as a viable candidate for MW applications. Parallel studies on other nanowires, including single-walled carbon nanotubes, showed conductivity as expected. PMID:12070346

  18. Eigensystem realization algorithm modal identification experiences with mini-mast

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Schenk, Axel; Noll, Christopher

    1992-01-01

    This paper summarizes work performed under a collaborative research effort between the National Aeronautics and Space Administration (NASA) and the German Aerospace Research Establishment (DLR, Deutsche Forschungsanstalt für Luft- und Raumfahrt). The objective is to develop and demonstrate system identification technology for future large space structures. Recent experiences using the Eigensystem Realization Algorithm (ERA) for modal identification of Mini-Mast are reported. Mini-Mast is a 20 m long deployable space truss used for structural dynamics and active vibration-control research at the Langley Research Center. A comprehensive analysis of 306 frequency response functions (3 excitation forces and 102 displacement responses) was performed. Emphasis is placed on two topics of current research: (1) gaining an improved understanding of ERA performance characteristics (theory vs. practice); and (2) developing reliable techniques to improve identification results for complex experimental data. Because of nonlinearities and numerous local modes, modal identification of Mini-Mast proved to be surprisingly difficult. Methods were available with ERA, however, for obtaining detailed, high-confidence results.
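
    For readers unfamiliar with ERA, its core is a Hankel-matrix SVD over impulse-response (Markov) parameters, from which a discrete state-space model (A, B, C) is realized. The single-input, single-output sketch below on a synthetic damped mode is a minimal illustration; the Hankel dimensions and model order are arbitrary choices:

    ```python
    import numpy as np

    def era(markov, order, rows=20, cols=20):
        """Minimal SISO Eigensystem Realization Algorithm: Hankel matrices of
        impulse-response (Markov) parameters -> SVD -> realized (A, B, C)."""
        H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
        H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
        U, s, Vt = np.linalg.svd(H0)
        Ur, Vr = U[:, :order], Vt[:order, :]
        S_half = np.diag(np.sqrt(s[:order]))
        S_ihalf = np.diag(1.0 / np.sqrt(s[:order]))
        A = S_ihalf @ Ur.T @ H1 @ Vr.T @ S_ihalf
        B = (S_half @ Vr)[:, :1]        # first input column
        C = (Ur @ S_half)[:1, :]        # first output row
        return A, B, C

    # Impulse response of one lightly damped mode (discrete time, h[k], k >= 1).
    k = np.arange(1, 60)
    h = (0.97 ** k) * np.sin(0.3 * k)

    A, B, C = era(h, order=2)
    print("frequency (rad/sample):", np.abs(np.angle(np.linalg.eigvals(A))))
    print("pole radius (damping):", np.abs(np.linalg.eigvals(A)))
    ```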

  19. Experiences with an adaptive mesh refinement algorithm in numerical relativity.

    NASA Astrophysics Data System (ADS)

    Choptuik, M. W.

    An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.

  20. A field experiment shows that subtle linguistic cues might not affect voter behavior.

    PubMed

    Gerber, Alan S; Huber, Gregory A; Biggers, Daniel R; Hendry, David J

    2016-06-28

    One of the most important recent developments in social psychology is the discovery of minor interventions that have large and enduring effects on behavior. A leading example of this class of results is in the work by Bryan et al. [Bryan CJ, Walton GM, Rogers T, Dweck CS (2011) Proc Natl Acad Sci USA 108(31):12653-12656], which shows that administering a set of survey items worded so that subjects think of themselves as voters (noun treatment) rather than as voting (verb treatment) substantially increases political participation (voter turnout) among subjects. We revisit these experiments by replicating and extending their research design in a large-scale field experiment. In contrast to the 11 to 14 percentage point greater turnout among those exposed to the noun rather than the verb treatment reported in the work by Bryan et al., we find no statistically significant difference in turnout between the noun and verb treatments (the point estimate of the difference is approximately zero). Furthermore, when we benchmark these treatments against a standard get-out-the-vote message, we estimate that both are less effective at increasing turnout than a much shorter basic mobilization message. In our conclusion, we detail how our study differs from the work by Bryan et al. and discuss how our results might be interpreted.

  1. Designing, Visualizing, and Discussing Algorithms within a CS 1 Studio Experience: An Empirical Study

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Brown, Jonathan L.

    2008-01-01

    Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…

  2. Experiments of two pupil lateral motion tracking algorithms using a Shack-Hartmann sensor

    NASA Astrophysics Data System (ADS)

    Dai, Xiaolin; Hippler, Stefan; Gendron, Eric

    2017-01-01

    Pupil stability is one of the factors which limit the performance and operational stability of adaptive optics (AO) systems. This paper analyses two pupil-tracking methods to measure the lateral pupil shift: the first one utilizes the fluxes in all outer edge sub-apertures of a Shack-Hartmann sensor, and the second one utilizes the real-time interaction matrix used in an AO system. Experiments with a 9×9 Shack-Hartmann sensor are conducted to verify both pupil-tracking algorithms. The results show that both algorithms are effective: after two correction steps, the residual pupil shift is reduced to less than 5% of a Shack-Hartmann sub-aperture.
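
    The first (edge-flux) method can be illustrated with a toy model: a displaced circular pupil illuminates a 9×9 sub-aperture grid, and the flux imbalance between opposite outer edge rows and columns tracks the lateral shift. The geometry and the normalized imbalance signal are invented for illustration; a real system calibrates this signal against known offsets:

    ```python
    import numpy as np

    def pupil_fluxes(shift_x, shift_y, n=9, oversample=50):
        """Flux collected by each sub-aperture of an n x n Shack-Hartmann grid
        for a circular pupil displaced by (shift_x, shift_y) sub-apertures."""
        size = n * oversample
        y, x = np.mgrid[:size, :size]
        cx = (n / 2 + shift_x) * oversample
        cy = (n / 2 + shift_y) * oversample
        pupil = ((x - cx) ** 2 + (y - cy) ** 2) < (n / 2 * oversample) ** 2
        return pupil.reshape(n, oversample, n, oversample).sum(axis=(1, 3))

    def edge_imbalance(flux):
        # Normalized flux imbalance between opposite outer edges; a real AO
        # system would calibrate this signal against known pupil offsets.
        dx = (flux[:, -1].sum() - flux[:, 0].sum()) / (flux[:, -1].sum() + flux[:, 0].sum())
        dy = (flux[-1, :].sum() - flux[0, :].sum()) / (flux[-1, :].sum() + flux[0, :].sum())
        return dx, dy

    flux = pupil_fluxes(0.3, -0.2)
    print(edge_imbalance(flux))   # signs should track the applied (+x, -y) shift
    ```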

  3. PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment

    SciTech Connect

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

    This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
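
    A sketch of the scheme, assuming the common Lozi-map form x' = 1 - a|x| + y, y' = bx with a = 1.7, b = 0.5 (these are exactly the control parameters the paper tunes): chaotic iterates are rescaled to [0, 1] and substituted for the uniform random draws in the PSO velocity update. All PSO coefficients here are illustrative:

    ```python
    import numpy as np

    class LoziRNG:
        """Chaotic pseudo-random numbers from the Lozi map
        x' = 1 - a|x| + y,  y' = b*x  (a = 1.7, b = 0.5 is a common choice).
        Iterates are crudely rescaled and clipped to [0, 1]."""
        def __init__(self, a=1.7, b=0.5, x=0.1, y=0.1):
            self.a, self.b, self.x, self.y = a, b, x, y

        def next(self):
            self.x, self.y = 1.0 - self.a * abs(self.x) + self.y, self.b * self.x
            return min(max((self.x + 1.0) / 2.5, 0.0), 1.0)

    rng = LoziRNG()

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        # Standard PSO velocity/position update, with the two random factors
        # drawn from the chaotic map instead of a uniform generator.
        r1, r2 = rng.next(), rng.next()
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v

    print([round(rng.next(), 3) for _ in range(5)])   # sample chaotic draws
    ```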

  4. Experiences with the PGAPack Parallel Genetic Algorithm library

    SciTech Connect

    Levine, D.; Hallstrom, P.; Noelle, D.; Walenz, B.

    1997-07-01

    PGAPack is the first widely distributed parallel genetic algorithm library. Since its release, several thousand copies have been distributed worldwide to interested users. In this paper we discuss the key components of the PGAPack design philosophy and present a number of application examples that use PGAPack.

  5. F-18 SRA closeup of nose cap showing Advanced L-Probe Air Data Integration experiment

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This L-shaped probe mounted on the forward fuselage of a modified F-18 Systems Research Aircraft was the focus of an air data collection experiment flown at NASA's Dryden Flight Research Center, Edwards, California. The Advanced L-Probe Air Data Integration (ALADIN) experiment focused on providing pilots with angle-of-attack and angle-of-sideslip information as well as traditional airspeed and altitude data from a single system. For the experiment, the probes--one mounted on either side of the F-18's forward fuselage--were hooked to a series of four transducers, which relayed pressure measurements to an on-board research computer.

  6. A statistical algorithm showing coenzyme Q10 and citrate synthase as biomarkers for mitochondrial respiratory chain enzyme activities.

    PubMed

    Yubero, D; Adin, A; Montero, R; Jou, C; Jiménez-Mallebrera, C; García-Cazorla, A; Nascimento, A; O'Callaghan, M M; Montoya, J; Gort, L; Navas, P; Ribes, A; Ugarte, M D; Artuch, R

    2016-12-01

    Laboratory data interpretation for the assessment of complex biological systems remains a great challenge, as occurs in mitochondrial function research studies. The classical biochemical data interpretation of patients versus reference values may be insufficient, and in fact the current classifications of mitochondrial patients are still done on the basis of probability criteria. We have developed and applied a mathematical agglomerative algorithm to search for correlations among the different biochemical variables of the mitochondrial respiratory chain in order to identify populations displaying correlation coefficients >0.95. We demonstrated that coenzyme Q10 may be a better biomarker of mitochondrial respiratory chain enzyme activities than the citrate synthase activity. Furthermore, the application of this algorithm may be useful to re-classify mitochondrial patients or to explore associations among other biochemical variables from different biological systems.
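
    The abstract does not specify the agglomerative procedure; one plausible reading (hierarchical clustering of variables with distance 1 - |r|, cut where |r| > 0.95) can be sketched with SciPy on synthetic data:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    # Synthetic data: 6 biochemical variables over 40 patients; variables 0-2
    # share a common driving factor, so they correlate strongly.
    factor = rng.normal(size=40)
    data = np.column_stack([factor + rng.normal(0, 0.1, 40) for _ in range(3)] +
                           [rng.normal(size=40) for _ in range(3)])

    corr = np.corrcoef(data, rowvar=False)
    dist = 1.0 - np.abs(corr)                # distance 0 when |r| = 1
    np.fill_diagonal(dist, 0.0)
    link = linkage(squareform(dist, checks=False), method="average")
    # Cut so grouped variables have pairwise |r| > 0.95 (distance < 0.05).
    groups = fcluster(link, t=0.05, criterion="distance")
    print(groups)    # variables 0-2 should share one group label
    ```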

  7. Lullaby Light Shows: Everyday Musical Experience among Under-Two-Year-Olds

    ERIC Educational Resources Information Center

    Young, Susan

    2008-01-01

    This article reports on information gathered from a set of interviews carried out with 88 mothers of under-two-year-olds. The interviews enquired about the everyday musical experiences of their babies and very young children in the home. From the process of analysis, the responses to the interviews were grouped into three main areas: musical…

  8. Video tracking algorithm of long-term experiment using stand-alone recording system

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Jen; Li, Yan-Chay; Huang, Ke-Nung; Jen, Sun-Lon; Young, Ming-Shing

    2008-08-01

    Many medical and behavioral applications require the ability to monitor and quantify the behavior of small animals. In general these animals are confined in small cages, and often these situations involve very large numbers of cages. Modern research facilities commonly monitor thousands of animals simultaneously over long periods of time. However, conventional systems require one personal computer per monitoring platform, which is too complex and expensive, and increases power consumption, for large laboratory applications. This paper presents a simplified video tracking algorithm for long-term recording using a stand-alone system. The presented tracking algorithm's computation speed is very fast, its data storage requirements are small, and its hardware requirements are minimal. The stand-alone system automatically performs tracking and saves the acquired data to a secure digital card. The proposed system is designed for video collected at 640×480 pixel resolution with 16-bit color, and the tracking result is updated at 30 frames/s. Only the locomotion data are stored, so the data storage requirements are minimized. In addition, detection via the designed algorithm uses the Cb and Cr values of a colored marker affixed to the target to define the tracked position, and allows multi-object tracking against complex backgrounds. A preliminary experiment showed that the tracking information stored by the portable, stand-alone system can provide comprehensive information on an animal's activity.
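
    The chroma-keyed detection step can be sketched directly: convert RGB to Cb/Cr (ITU-R BT.601), threshold around the marker's chroma, and take the centroid of the mask. The threshold ranges and marker color below are hypothetical and would be tuned to the actual marker:

    ```python
    import numpy as np

    def track_marker(frame_rgb, cb_range=(95, 115), cr_range=(200, 225)):
        """Locate a colored marker by thresholding in the Cb/Cr plane (chroma
        is fairly insensitive to brightness changes in a cluttered scene).
        The ranges are hypothetical and would be tuned to the real marker."""
        r = frame_rgb[..., 0].astype(float)
        g = frame_rgb[..., 1].astype(float)
        b = frame_rgb[..., 2].astype(float)
        # ITU-R BT.601 RGB -> Cb/Cr (offset-binary form).
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        mask = ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
                (cr_range[0] <= cr) & (cr <= cr_range[1]))
        if not mask.any():
            return None                      # marker not visible in this frame
        ys, xs = np.nonzero(mask)
        return float(xs.mean()), float(ys.mean())   # marker centroid (x, y)

    # One 640x480 frame with a hypothetical reddish marker patch near (300, 200).
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    frame[195:205, 295:305] = (200, 30, 40)
    print(track_marker(frame))               # ~ (299.5, 199.5)
    ```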

  9. "Binge" drinking experience in adolescent mice shows sex differences and elevated ethanol intake in adulthood.

    PubMed

    Strong, Moriah N; Yoneyama, Naomi; Fretwell, Andrea M; Snelling, Chris; Tanchuck, Michelle A; Finn, Deborah A

    2010-06-01

    Binge drinking, defined as achieving blood ethanol concentrations (BEC) of 80 mg%, has been increasing in adolescents and was reported to predispose to later physical dependence. The present experiments utilized an animal model of binge drinking to compare the effect of ethanol "binge" experience during adolescence or adulthood on subsequent ethanol intake in male and female C57BL/6 mice. Adolescent and adult mice were initially exposed to the scheduled high alcohol consumption procedure, which produces BECs that exceed the levels for binge drinking following a 30-min ethanol session every third day. Ethanol intake and BECs were significantly higher in the adolescent (approximately 3 g/kg, 199 mg%) versus adult (approximately 2 g/kg, 135 mg%) mice during the first three ethanol sessions, but were more equivalent during the final two ethanol sessions (1.85-2.0 g/kg, 129-143 mg%). Then, separate groups of the ethanol-experienced mice were tested with ethanol-naïve adolescent and adult mice for 2-h limited access (10% and 20% solutions) or 24-h (5%, 10% and 20% solutions) ethanol preference drinking. Limited access ethanol intake was significantly higher in female versus male mice, but was not altered by age or ethanol experience. In contrast, 24-h ethanol intake was significantly higher in the adolescent versus adult mice and in female versus male mice. Furthermore, binge drinking experience in the adolescent mice significantly increased subsequent ethanol intake, primarily due to intake in female mice. Thus, adolescent binge drinking significantly increased unlimited ethanol intake during adulthood, with female mice more susceptible to this effect.

  10. Real Science: MIT Reality Show Tracks Experiences, Frustrations of Chemistry Lab Students

    ERIC Educational Resources Information Center

    Cooper, Kenneth J.

    2012-01-01

    A reality show about a college course--a chemistry class no less? That's what "ChemLab Boot Camp" is. The 14-part series of short videos is being released one episode at a time on the online learning site of the Massachusetts Institute of Technology. The novel show follows a diverse group of 14 freshmen as they struggle to master the…

  11. A ranking algorithm for spacelab crew and experiment scheduling

    NASA Technical Reports Server (NTRS)

    Grone, R. D.; Mathis, F. H.

    1980-01-01

    The problem of obtaining an optimal or near-optimal schedule for scientific experiments to be performed on Spacelab missions is addressed. The current capabilities in this regard are examined, and a method of ranking experiments in order of difficulty is developed to support the existing software. Experimental data are obtained from applying this method to the sets of experiments corresponding to Spacelab missions 1, 2, and 3. Finally, suggestions are made concerning desirable modifications and features of second-generation software being developed for this problem.

  12. A common anterior insula representation of disgust observation, experience and imagination shows divergent functional connectivity pathways.

    PubMed

    Jabbi, Mbemba; Bastiaansen, Jojanneke; Keysers, Christian

    2008-08-13

    Similar brain regions are involved when we imagine, observe and execute an action. Is the same true for emotions? Here, the same subjects were scanned while they (a) experienced, (b) viewed someone else experiencing, and (c) imagined experiencing gustatory emotions (through script-driven imagery). Capitalizing on the fact that disgust is repeatedly inducible within the scanner environment, we scanned the same participants while they (a) viewed actors tasting the content of a cup and looking disgusted, (b) tasted unpleasant bitter liquids to induce disgust, and (c) read and imagined scenarios involving disgust and their neutral counterparts. To reduce habituation, we inter-mixed trials of positive emotions in all three scanning experiments. We found voxels in the anterior insula and adjacent frontal operculum to be involved in all three modalities of disgust, suggesting that simulation in the context of social perception and mental imagery of disgust share a common neural substrate. Using effective connectivity, this shared region was however found to be embedded in distinct functional circuits during the three modalities, suggesting why observing, imagining and experiencing an emotion feels so different.

  13. Integrative Analysis of Disease Signatures Shows Inflammation Disrupts Juvenile Experience-Dependent Cortical Plasticity

    PubMed Central

    Smith, Milo R.; Burman, Poromendro

    2016-01-01

    Throughout childhood and adolescence, periods of heightened neuroplasticity are critical for the development of healthy brain function and behavior. Given the high prevalence of neurodevelopmental disorders, such as autism, identifying disruptors of developmental plasticity represents an essential step for developing strategies for prevention and intervention. Applying a novel computational approach that systematically assessed connections between 436 transcriptional signatures of disease and multiple signatures of neuroplasticity, we identified inflammation as a common pathological process central to a diverse set of diseases predicted to dysregulate plasticity signatures. We tested the hypothesis that inflammation disrupts developmental cortical plasticity in vivo using the mouse ocular dominance model of experience-dependent plasticity in primary visual cortex. We found that the administration of systemic lipopolysaccharide suppressed plasticity during the juvenile critical period, with accompanying transcriptional changes in a particular set of molecular regulators within primary visual cortex. These findings suggest that inflammation may have unrecognized adverse consequences on the postnatal developmental trajectory and indicate that treating inflammation may reduce the burden of neurodevelopmental disorders. PMID:28101530

  14. Why do people show minimal knowledge updating with task experience: inferential deficit or experimental artifact?

    PubMed

    Hertzog, Christopher; Price, Jodi; Burpee, Ailis; Frentzel, William J; Feldstein, Simeon; Dunlosky, John

    2009-01-01

    Students generally do not have highly accurate knowledge about strategy effectiveness for learning, such as that imagery is superior to rote repetition. During multiple study-test trials using both strategies, participants' predictions about performance on List 2 do not markedly differ for the two strategies, even though List 1 recall is substantially greater for imagery. Two experiments evaluated whether such deficits in knowledge updating about the strategy effects were due to an experimental artifact or to inaccurate inferences about the effects the strategies had on recall. Participants studied paired associates on two study-test trials--they were instructed to study half using imagery and half using rote repetition. Metacognitive judgements tapped the quality of inferential processes about the strategy effects during the List 1 test and tapped gains in knowledge about the strategies across lists. One artifactual explanation--noncompliance with strategy instructions--was ruled out, whereas manipulations aimed at supporting the data available to inferential processes improved but did not fully repair knowledge updating.

  15. Why Do People Show Minimal Knowledge Updating with Task Experience: Inferential Deficit or Experimental Artifact?

    PubMed Central

    Hertzog, Christopher; Price, Jodi; Burpee, Ailis; Frentzel, William J.; Feldstein, Simeon; Dunlosky, John

    2008-01-01

    Students generally do not have highly accurate knowledge about strategy effectiveness for learning, such as that imagery is superior to rote repetition. During multiple study-test trials using both strategies, participants’ predictions about performance on List 2 do not markedly differ for the two strategies, even though List 1 recall is substantially greater for imagery. Two experiments evaluated whether such deficits in knowledge updating about the strategy effects were due to an experimental artifact or to inaccurate inferences about the effects the strategies had on recall. Participants studied paired associates on two study-test trials--they were instructed to study half using imagery and half using rote repetition. Metacognitive judgments tapped the quality of inferential processes about the strategy effects during the List 1 test and tapped gains in knowledge about the strategies across lists. One artifactual explanation--noncompliance with strategy instructions--was ruled out, whereas manipulations aimed at supporting the data available to inferential processes improved but did not fully repair knowledge updating. PMID:18609379

  16. Brain regions that show repetition suppression and enhancement: A meta-analysis of 137 neuroimaging experiments.

    PubMed

    Kim, Hongkeun

    2017-04-01

    Repetition suppression and enhancement refer to the reduction and increase in the neural responses for repeated rather than novel stimuli, respectively. This study provides a meta-analysis of the effects of repetition suppression and enhancement, restricting the data used to that involving fMRI/PET, visual stimulus presentation, and healthy participants. The major findings were as follows. First, the global topography of the repetition suppression effects was strikingly similar to that of the "subsequent memory" effects, indicating that the mechanism for repetition suppression is the reduced engagement of an encoding system. The lateral frontal cortex effects involved the frontoparietal control network regions anteriorly and the dorsal attention network regions posteriorly. The left fusiform cortex effects predominantly involved the dorsal attention network regions, whereas the right fusiform cortex effects mainly involved the visual network regions. Second, the category-specific meta-analyses and their comparisons indicated that most parts of the alleged category-specific regions showed repetition suppression for more than one stimulus category. In this regard, these regions may not be "dedicated cortical modules," but are more likely parts of multiple overlapping large-scale maps of simple features. Finally, the global topography of the repetition enhancement effects was similar to that of the "retrieval success" effects, suggesting that the mechanism for repetition enhancement is voluntary or involuntary explicit retrieval during an implicit memory task. Taken together, these results clarify the network affiliations of the regions showing reliable repetition suppression and enhancement effects and contribute to the theoretical interpretations of the local and global topography of these two effects. Hum Brain Mapp 38:1894-1913, 2017. © 2017 Wiley Periodicals, Inc.

  17. Finite element algorithm reproducing hip squeak measured in experiment

    NASA Astrophysics Data System (ADS)

    Kang, Jaeyoung

    2017-04-01

    In this study, the frequency spectrum of squeak noise in a hip joint system is measured in experiment. The numerical reproduction of the hip squeak signal involves the formulation of the finite element geometry, the analytical contact kinematics such as Hertz theory and Coulomb's law, and the mode discretization. For a general approach, the contact kinematics are analytically modeled to easily adjust the contact location, the contact area, the rotation direction, the pressure distribution, the friction law, and so on. Furthermore, the friction stress vectors act on the 3-dimensional spherical contact surfaces, where they can be divided into the steady-sliding direction and its transverse slip direction. Numerical calculations for the various contact parameters are conducted to investigate the possibility of hip squeak occurrence, and the nonlinear oscillations after the onset of squeak are also solved. In the transient analysis, the periodic limit cycle of hip squeaking is shown to be a stick-slip type oscillation. The numerical frequency spectrum is then qualitatively compared with the hip squeak signal measured in experiment. The stick-slip oscillation during hip squeaking and its contact behavior are also discussed over the contact area within one period.

  18. Multiagent pursuit-evasion games: Algorithms and experiments

    NASA Astrophysics Data System (ADS)

    Kim, Hyounjin

    Deployment of intelligent agents has been made possible through advances in control software, microprocessors, sensor/actuator technology, communication technology, and artificial intelligence. Intelligent agents now play important roles in many applications where human operation is too dangerous or inefficient. There is little doubt that the world of the future will be filled with intelligent robotic agents employed to autonomously perform tasks, or embedded in systems all around us, extending our capabilities to perceive, reason and act, and replacing human efforts. There are numerous real-world applications in which a single autonomous agent is not suitable and multiple agents are required. However, after years of active research in multi-agent systems, current technology is still far from achieving many of these real-world applications. Here, we consider the problem of deploying a team of unmanned ground vehicles (UGV) and unmanned aerial vehicles (UAV) to pursue a second team of UGV evaders while concurrently building a map in an unknown environment. This pursuit-evasion game encompasses many of the challenging issues that arise in operations using intelligent multi-agent systems. We cast the problem in a probabilistic game theoretic framework and consider two computationally feasible pursuit policies: greedy and global-max. We also formulate this probabilistic pursuit-evasion game as a partially observable Markov decision process and employ a policy search algorithm to obtain a good pursuit policy from a restricted class of policies. The estimated value of this policy is guaranteed to be uniformly close to the optimal value in the given policy class under mild conditions. To implement this scenario on real UAVs and UGVs, we propose a distributed hierarchical hybrid system architecture which emphasizes the autonomy of each agent yet allows for coordinated team efforts. We then describe our implementation on a fleet of UGVs and UAVs, detailing components such

  19. Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR

    NASA Astrophysics Data System (ADS)

    Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz, A.; Kraus, J.; Pleiter, D.

    2015-12-01

    P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, PANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks, optimized for online data processing applications, using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented.

  20. Trigger and Reconstruction Algorithms for the Japanese Experiment Module- Extreme Universe Space Observatory (JEM-EUSO)

    NASA Technical Reports Server (NTRS)

    Adams, J. H., Jr.; Andreev, Valeri; Christl, M. J.; Cline, David B.; Crawford, Hank; Judd, E. G.; Pennypacker, Carl; Watts, J. W.

    2007-01-01

    The JEM-EUSO collaboration intends to study high energy cosmic ray showers using a large downward-looking telescope mounted on the Japanese Experiment Module of the International Space Station. The telescope focal plane is instrumented with approximately 300k pixels operating as a digital camera, taking snapshots at an approximately 1 MHz rate. We report an investigation of the trigger and reconstruction efficiency of various algorithms based on time and spatial analysis of the pixel images. Our goal is to develop trigger and reconstruction algorithms that will allow the instrument to detect energies low enough to connect smoothly to ground-based observations.

  1. Vertigo in childhood: proposal for a diagnostic algorithm based upon clinical experience.

    PubMed

    Casani, A P; Dallan, I; Navari, E; Sellari Franceschini, S; Cerchiai, N

    2015-06-01

    The aim of this paper is to analyse, after clinical experience with a series of patients with established diagnoses and review of the literature, all relevant anamnestic features in order to build a simple diagnostic algorithm for vertigo in childhood. This study is a retrospective chart review. A series of 37 children underwent complete clinical and instrumental vestibular examination. Only neurological disorders or genetic diseases represented exclusion criteria. All diagnoses were reviewed after applying the most recent diagnostic guidelines. In our experience, the most common aetiology for dizziness is vestibular migraine (38%), followed by acute labyrinthitis/neuritis (16%) and somatoform vertigo (16%). Benign paroxysmal vertigo was diagnosed in 4 patients (11%) and paroxysmal torticollis was diagnosed in a 1-year-old child. In 8% (3 patients) of cases, the dizziness had a post-traumatic origin: 1 canalolithiasis of the posterior semicircular canal and 2 labyrinthine concussions, respectively. Menière's disease was diagnosed in 2 cases. A bilateral vestibular failure of unknown origin caused chronic dizziness in 1 patient. In conclusion, this algorithm could represent a good tool for guiding clinical suspicion to correct diagnostic assessment in dizzy children where no neurological findings are detectable. The algorithm has just a few simple steps, based mainly on two aspects to be investigated early: temporal features of vertigo and presence of hearing impairment. A different algorithm has been proposed for cases in which a traumatic origin is suspected.

  2. F-18 SRA closeup of nose cap showing L-Probe experiment and standard air data sensors

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This under-the-nose view of a modified F-18 Systems Research Aircraft at NASA's Dryden Flight Research Center, Edwards, California, shows three critical components of the aircraft's air data systems which are mounted on both sides of the forward fuselage. Furthest forward are two L-probes that were the focus of the recent Advanced L-probe Air Data Integration (ALADIN) experiment. Behind the L-probes are angle-of-attack vanes, while below them are the aircraft's standard pitot-static air data probes. The ALADIN experiment focused on providing pilots with angle-of-attack and angle-of-sideslip air data as well as traditional airspeed and altitude information, all from a single system. Once fully developed, the new L-probes have the potential to give pilots more accurate air data information with less hardware.

  3. Hill-type muscle model parameters determined from experiments on single muscles show large animal-to-animal variation.

    PubMed

    Blümel, Marcus; Guschlbauer, Christoph; Daun-Gruhn, Silvia; Hooper, Scott L; Büschges, Ansgar

    2012-11-01

    Models built using mean data can represent only a very small percentage, or none, of the population being modeled, and produce different activity than any member of it. Overcoming this "averaging" pitfall requires measuring, in single individuals in single experiments, all of the system's defining characteristics. We have developed protocols that allow all the parameters in the curves used in typical Hill-type models (passive and active force-length, series elasticity, force-activation, force-velocity) to be determined from experiments on individual stick insect muscles (Blümel et al. 2012a). A requirement for means to not represent the population well is that the population shows large variation in its defining characteristics. We therefore used these protocols to measure extensor muscle defining parameters in multiple animals. Across-animal variability in these parameters can be very large, ranging from 1.3- to 17-fold. This large variation is consistent with earlier data in which extensor muscle responses to identical motor neuron driving showed large animal-to-animal variability (Hooper et al. 2006), and suggests that accurate modeling of extensor muscles requires modeling individual-by-individual. These complete characterizations of individual muscles also allowed us to test for parameter correlations. Two parameter pairs significantly co-varied, suggesting that a simpler model could reproduce muscle responses equally well.

  4. Data Association and Bullet Tracking Algorithms for the Fight Sight Experiment

    SciTech Connect

    Breitfeller, E; Roberts, R

    2005-10-07

    Previous LLNL investigators developed a bullet and projectile tracking system over a decade ago. Renewed interest in the technology has spawned research that culminated in a live-fire experiment, called Fight Sight, in September 2005. The experiment was more complex than previous LLNL bullet tracking experiments in that it included multiple shooters with simultaneous fire, new sensor-shooter geometries, large amounts of optical clutter, and greatly increased sensor-shooter distances. This presentation describes the data association and tracking algorithms for the Fight Sight experiment. Image processing applied to the imagery yields a sequence of bullet features which are input to a data association routine. The data association routine matches features with existing tracks, or initializes new tracks as needed. A Kalman filter is used to smooth and extrapolate existing tracks. The Kalman filter is also used to back-track bullets to their point of origin, thereby revealing the location of the shooter. It also provides an error ellipse for each shooter, quantifying the uncertainty of shooter location. In addition to describing the data association and tracking algorithms, several examples from the Fight Sight experiment are also presented.
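
    The tracking core the paper describes, a Kalman filter that smooths a bullet track and extrapolates it back to its origin, can be sketched in one dimension with a constant-velocity model. All dynamics, noise levels, and measurements below are synthetic assumptions:

    ```python
    import numpy as np

    # Constant-velocity Kalman filter along one image axis; state = [pos, vel].
    dt = 1.0                                 # frame interval (arbitrary units)
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    Hm = np.array([[1.0, 0.0]])              # only position is measured
    Q = 0.01 * np.eye(2)                     # process noise (tuning parameter)
    R = np.array([[4.0]])                    # measurement noise (pixels^2)

    x = np.array([[0.0], [0.0]])             # initial state
    P = 100.0 * np.eye(2)                    # initial uncertainty

    # Synthetic track: origin at 5.0, speed 3.0 per frame, noisy measurements.
    z_all = 5.0 + 3.0 * np.arange(20) + np.random.normal(0.0, 2.0, 20)
    for z in z_all:
        x, P = F @ x, F @ P @ F.T + Q                        # predict
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)                      # Kalman gain
        x = x + K @ (np.array([[z]]) - Hm @ x)               # update
        P = (np.eye(2) - K @ Hm) @ P

    # Back-track: extrapolate the final state backwards to frame 0, where the
    # shooter would be; P quantifies the uncertainty of that estimate.
    pos, vel = x.ravel()
    print("estimated origin:", pos - vel * dt * (len(z_all) - 1))   # true: 5.0
    ```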

  5. Experiences on developing digital down conversion algorithms using Xilinx system generator

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Yuan, Yuan; Zhao, Lizhi

    2013-07-01

    The Digital Down Conversion (DDC) algorithm is a classical signal processing method which is widely used in radar and communication systems. In this paper, the DDC function is implemented with the Xilinx System Generator tool on an FPGA. System Generator is an FPGA design tool provided by Xilinx Inc. and MathWorks Inc. It makes it very convenient for programmers to manipulate the design and debug the function, especially for complex algorithms. Through the development process of the DDC function based on System Generator, the results show that System Generator is a very fast and efficient tool for FPGA design.
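
    The classical DDC chain (mix with a numerically controlled oscillator, low-pass filter, decimate) is easy to prototype in NumPy/SciPy before mapping it onto System Generator blocks. Sample rates, filter length, and the test tone below are illustrative:

    ```python
    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 1_000_000         # input sample rate (Hz)
    f_c = 250_000          # carrier to shift down to baseband (Hz)
    decim = 10             # decimation factor

    t = np.arange(20000) / fs
    # Narrowband test signal: a 1 kHz tone riding on the carrier.
    rf = np.cos(2 * np.pi * f_c * t + 2 * np.pi * 1000 * t)

    # 1) Mix with a numerically controlled oscillator (NCO) to baseband.
    nco = np.exp(-2j * np.pi * f_c * t)
    mixed = rf * nco

    # 2) Low-pass filter to keep only the band that survives decimation.
    taps = firwin(129, cutoff=fs / (2 * decim), fs=fs)
    filtered = lfilter(taps, 1.0, mixed)

    # 3) Decimate down to the output rate.
    baseband = filtered[::decim]
    print(baseband.shape, fs / decim, "Hz output rate")
    ```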

  6. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904
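
    A sketch of the two modifications as read from the abstract, grafted onto a standard gravitational search loop; the schedules for α and G, and all constants, are guesses for illustration rather than the paper's formulas:

    ```python
    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def ecgsa_sketch(f, dim=5, n=20, iters=100, g0=100.0, seed=0):
        """Gravitational search with two ECGSA-style modifications, as read
        from the abstract: agents restart each step from the best positions
        evaluated so far, and the damping coefficient alpha grows over the
        run so G decays faster late in the search."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(-5.0, 5.0, (n, dim))
        V = np.zeros((n, dim))
        best_X = X.copy()
        best_f = np.array([f(x) for x in X])
        for t in range(iters):
            fit = np.array([f(x) for x in X])
            improved = fit < best_f
            best_X[improved], best_f[improved] = X[improved], fit[improved]
            X = best_X.copy()                    # search from best experiments
            alpha = 10.0 + 10.0 * t / iters      # dynamic damping coefficient
            G = g0 * np.exp(-alpha * t / iters)
            worst, best = best_f.max(), best_f.min()
            m = (worst - best_f) / (worst - best + 1e-12)
            M = m / (m.sum() + 1e-12)
            for i in range(n):
                acc = np.zeros(dim)
                for j in range(n):
                    if j != i:
                        r = np.linalg.norm(X[j] - X[i]) + 1e-12
                        acc += rng.random() * G * M[j] * (X[j] - X[i]) / r
                V[i] = rng.random() * V[i] + acc
            X = X + V
        return best_f.min()

    print("best value found:", ecgsa_sketch(sphere))
    ```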

  7. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness.

  8. Internally labeled Cy3/Cy5 DNA constructs show greatly enhanced photo-stability in single-molecule FRET experiments

    PubMed Central

    Lee, Wonbae; von Hippel, Peter H.; Marcus, Andrew H.

    2014-01-01

    DNA constructs labeled with cyanine fluorescent dyes are important substrates for single-molecule (sm) studies of the functional activity of protein–DNA complexes. We previously studied the local DNA backbone fluctuations of replication fork and primer–template DNA constructs labeled with Cy3/Cy5 donor–acceptor Förster resonance energy transfer (FRET) chromophore pairs and showed that, contrary to dyes linked ‘externally’ to the bases with flexible tethers, direct ‘internal’ (and rigid) insertion of the chromophores into the sugar-phosphate backbones resulted in DNA constructs that could be used to study intrinsic and protein-induced DNA backbone fluctuations by both smFRET and sm Fluorescent Linear Dichroism (smFLD). Here we show that these rigidly inserted Cy3/Cy5 chromophores also exhibit two additional useful properties, showing both high photo-stability and minimal effects on the local thermodynamic stability of the DNA constructs. The increased photo-stability of the internal labels significantly reduces the proportion of false positive smFRET conversion ‘background’ signals, thereby simplifying interpretations of both smFRET and smFLD experiments, while the decreased effects of the internal probes on local thermodynamic stability also make fluctuations sensed by these probes more representative of the unperturbed DNA structure. We suggest that internal probe labeling may be useful in studies of many DNA–protein interaction systems. PMID:24627223

  9. Experiment for validation of fluid-structure interaction models and algorithms.

    PubMed

    Hessenthaler, A; Gaddum, N R; Holub, O; Sinkus, R; Röhrle, O; Nordsletten, D

    2016-11-04

    In this paper a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-setup FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. The focus of the experiment is on biomedical engineering applications, with flow in the laminar regime at Reynolds numbers of 1283 and 651. Flow and solid domains were defined using computer-aided design (CAD) tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and the mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by using magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion.

  10. Pressurized laboratory experiments show no stable carbon isotope fractionation of methane during gas hydrate dissolution and dissociation.

    PubMed

    Lapham, Laura L; Wilson, Rachel M; Chanton, Jeffrey P

    2012-01-15

    The stable carbon isotopic ratio of methane (δ(13)C-CH(4)) recovered from marine sediments containing gas hydrate is often used to infer the gas source and associated microbial processes. This is a powerful approach because of distinct isotopic fractionation patterns associated with methane production by biogenic and thermogenic pathways and microbial oxidation. However, isotope fractionations due to physical processes, such as hydrate dissolution, have not been fully evaluated. We have conducted experiments to determine if hydrate dissolution or dissociation (two distinct physical processes) results in isotopic fractionation. In a pressure chamber, hydrate was formed from a methane gas source at 2.5 MPa and 4 °C, well within the hydrate stability field. Following formation, the methane source was removed while maintaining the hydrate at the same pressure and temperature, which stimulated hydrate dissolution. Over the duration of two dissolution experiments (each ~20-30 days), water and headspace samples were periodically collected and measured for methane concentrations and δ(13)C-CH(4) while the hydrate dissolved. For both experiments, the methane concentrations in the pressure chamber water and headspace increased over time, indicating that the hydrate was dissolving, but the δ(13)C-CH(4) values showed no significant trend and remained constant, within 0.5‰. This lack of isotope change over time indicates that there is no fractionation during hydrate dissolution. We also investigated previous findings that little isotopic fractionation occurs when the gas hydrate dissociates into gas bubbles and water due to the release of pressure. Over a 2.5 MPa pressure drop, the difference in the δ(13)C-CH(4) was <0.3‰. We have therefore confirmed that there is no isotope fractionation when the gas hydrate dissociates and demonstrated that there is no fractionation when the hydrate dissolves. Therefore, measured δ(13)C-CH(4) values near gas hydrates are not affected by these physical processes.

  11. Analysis of soil moisture extraction algorithm using data from aircraft experiments

    NASA Technical Reports Server (NTRS)

    Burke, H. H. K.; Ho, J. H.

    1981-01-01

    A soil moisture extraction algorithm is developed using a statistical parameter inversion method. Data sets from two aircraft experiments are utilized for the test. Multifrequency microwave radiometric data, surface temperature, and soil moisture information are contained in the data sets. The surface and near-surface (≤5 cm) soil moisture content can be extracted with an accuracy of approximately 5% to 6% for bare fields and fields with grass cover by using L, C, and X band radiometer data. This technique is suitable for handling large amounts of remote sensing data from space.

  12. SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)

    SciTech Connect

    Pokharel, S; Rana, S

    2014-06-01

    Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037), with different combinations of heterogeneous slabs, was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, polystyrene (tissue), PTFE (bone) and PMMA. The phantom has a 30×30×2.5 cm base plate with 2 cm recesses to insert the inhomogeneities. The detector systems used in this study are diodes, TLDs and Gafchromic EBT2 films. The diodes and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system. Several plans were created with the Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6 to 22 MeV. Results: The measured and calculated doses agreed very well for tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity. The point dose agreement was within 3.5%, and the gamma passing rate at 3%/3 mm was greater than 93% except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is because Eclipse reports dose to medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for routine clinical dose calculation. The algorithm does not report dose to water, on which most clinical experience is based; rather, it reports dose to medium directly. In the presence of an inhomogeneity such as bone, the dose discrepancy can be as high as 10% or even more, depending on the location of the normalization point or volume. Because radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.
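
    For context, the 3%/3 mm gamma criterion quoted above combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal one-dimensional sketch, assuming global normalization to the reference maximum, is shown below (an illustration of the standard gamma index, not the evaluation software used in the study).

        import numpy as np

        def gamma_1d(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
            """For each reference point, the minimum combined dose-difference /
            distance-to-agreement metric over all evaluated points (positions in mm)."""
            d_max = dose_ref.max()                 # global normalization (assumption)
            gamma = np.empty_like(dose_ref, dtype=float)
            for i, (xi, di) in enumerate(zip(positions, dose_ref)):
                dd = (dose_eval - di) / (dose_tol * d_max)   # dose-difference term
                dx = (positions - xi) / dist_tol             # distance term
                gamma[i] = np.sqrt(dd**2 + dx**2).min()
            return gamma

        # Passing rate at 3%/3 mm: np.mean(gamma_1d(ref, ev, x) <= 1.0)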

  13. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily

    2016-02-01

    The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of an explicit numerical scheme, which is an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of the numerical schemes for direct simulation; b) representation of the motion of large particles of a continuous medium in two coordinate systems (global and mobile); c) computing operations in the projections of these coordinate systems, and the direct and inverse transformations between them. Particular attention is paid to the use of the hardware and software of modern computer systems.

  14. Optimization of identity operation in NMR spectroscopy via genetic algorithm: Application to the TEDOR experiment

    NASA Astrophysics Data System (ADS)

    Manu, V. S.; Veglia, Gianluigi

    2016-12-01

    Identity operation in the form of π pulses is widely used in NMR spectroscopy. For an isolated single spin system, a sequence of even number of π pulses performs an identity operation, leaving the spin state essentially unaltered. For multi-spin systems, trains of π pulses with appropriate phases and time delays modulate the spin Hamiltonian to perform operations such as decoupling and recoupling. However, experimental imperfections often jeopardize the outcome, leading to severe losses in sensitivity. Here, we demonstrate that a newly designed Genetic Algorithm (GA) is able to optimize a train of π pulses, resulting in a robust identity operation. As proof-of-concept, we optimized the recoupling sequence in the transferred-echo double-resonance (TEDOR) pulse sequence, a key experiment in biological magic angle spinning (MAS) solid-state NMR for measuring multiple carbon-nitrogen distances. The GA modified TEDOR (GMO-TEDOR) experiment with improved recoupling efficiency results in a net gain of sensitivity up to 28% as tested on a uniformly 13C, 15N labeled microcrystalline ubiquitin sample. The robust identity operation achieved via GA paves the way for the optimization of several other pulse sequences used for both solid- and liquid-state NMR used for decoupling, recoupling, and relaxation experiments.
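
    The optimization loop itself can be sketched generically: a population of phase vectors for the pulse train is evolved against a black-box fidelity function. The operators below are generic GA choices for illustration; the paper's spin-dynamics simulation and its actual GA design are not reproduced here.

        import numpy as np

        def ga_optimize(fidelity, n_pulses=8, pop=40, gens=100, seed=1):
            """Evolve pulse-train phases to maximize a user-supplied fidelity."""
            rng = np.random.default_rng(seed)
            phases = rng.uniform(0, 2 * np.pi, (pop, n_pulses))
            for _ in range(gens):
                fit = np.array([fidelity(p) for p in phases])
                parents = phases[np.argsort(fit)[::-1][: pop // 2]]   # keep the fittest half
                kids = parents.copy()
                for k in kids:                                        # one-point crossover
                    mate = parents[rng.integers(len(parents))]
                    cut = rng.integers(1, n_pulses)
                    k[cut:] = mate[cut:]
                idx = rng.integers(n_pulses, size=len(kids))          # jitter one phase per child
                kids[np.arange(len(kids)), idx] += rng.normal(0, 0.3, len(kids))
                phases = np.vstack([parents, kids])
            fit = np.array([fidelity(p) for p in phases])
            return phases[fit.argmax()], fit.max()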

  15. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  16. Study of 201 Non-Small Cell Lung Cancer Patients Given Stereotactic Ablative Radiation Therapy Shows Local Control Dependence on Dose Calculation Algorithm

    SciTech Connect

    Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J.; Stevens, Craig W.; Kim, Jongphil; Yue, Binglin; DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir

    2014-04-01

    Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with the PB and 4 patients planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99(GITV) = 7.4 Gy, ΔD99(PTV) = 10.4 Gy, ΔV90(GITV) = 13.7%, ΔV90(PTV) = 37.6%, ΔD95(PTV) = 9.8 Gy, and ΔD(ISO) = 3.4 Gy, where GITV is the gross internal tumor volume. Conclusions: Local control in patients who were planned to the same nominal dose with the PB and CCC algorithms was statistically significantly different. Possible alternative

  17. F-15B in flight showing Supersonic Natural Laminar Flow (SS-NLF) experiment attached vertically to t

    NASA Technical Reports Server (NTRS)

    1999-01-01

    In-flight photo of the F-15B equipped with the Supersonic Natural Laminar Flow (SS-NLF) experiment. During four research flights, laminar flow was achieved over 80 percent of the test wing at speeds approaching Mach 2. This was accomplished as the sole result of the shape of the wing, without the use of suction gloves, such as on the F-16XL. Laminar flow is a condition in which air passes over a wing in smooth layers, rather than being turbulent. The greater the area of laminar flow, the lower the amount of friction drag on the wing, thus increasing an aircraft's range and fuel economy. Increasing the area of laminar flow on a wing has been the subject of research by engineers since the late 1940s, but substantial success has proven elusive. The SS-NLF experiment was intended to provide engineers with the data by which to design natural laminar flow wings.

  18. X-ray digital intra-oral tomosynthesis for quasi-three-dimensional imaging: system, reconstruction algorithm, and experiments

    NASA Astrophysics Data System (ADS)

    Li, Liang; Chen, Zhiqiang; Zhao, Ziran; Wu, Dufan

    2013-01-01

    At present, there are three main x-ray imaging modalities for dental clinical diagnosis: radiography, panorama and computed tomography (CT). We develop a new x-ray digital intra-oral tomosynthesis (IDT) system for quasi-three-dimensional dental imaging, which can be seen as an intermediate modality between traditional radiography and CT. In addition to the normal x-ray tube and digital sensor used in intra-oral radiography, IDT has a specially designed mechanical device to perform the tomosynthesis data acquisition. During the scan, the measurement geometry is such that the sensor is stationary inside the patient's mouth and the x-ray tube moves along an arc trajectory with respect to the intra-oral sensor. Therefore, the projection geometry can be obtained without any other reference objects, which makes it easily accepted in clinical applications. We also present a compressed sensing-based iterative reconstruction algorithm for this kind of intra-oral tomosynthesis. Finally, simulations and experiments were both carried out to evaluate this intra-oral imaging modality and algorithm. The results show that IDT has the potential to become a new tool for dental clinical diagnosis.

  19. Level 3 trigger algorithm and Hardware Platform for the HADES experiment

    NASA Astrophysics Data System (ADS)

    Kirschner, Daniel Georg; Agakishiev, Geydar; Liu, Ming; Perez, Tiago; Kühn, Wolfgang; Pechenov, Vladimir; Spataro, Stefano

    2009-01-01

    A next-generation real-time trigger method to improve the enrichment of lepton events in the High Acceptance DiElectron Spectrometer (HADES) trigger system has been developed. In addition, a flexible Hardware Platform (Gigabit Ethernet-Multi-Node, GE-MN) was developed to implement and test the trigger method. The trigger method correlates the ring information of the HADES Ring Imaging Cherenkov (RICH) detector with the fired wires (drift cells) of the HADES Mini Drift Chamber (MDC) detector. It is demonstrated that this Level 3 trigger method can enhance the number of events which contain leptons by a factor of up to 50 at efficiencies above 80%. The performance of the correlation method in terms of the events analyzed per second has been studied with the GE-MN prototype in a lab test setup by streaming previously recorded experiment data to the module. This paper is a compilation from Kirschner [Level 3 trigger algorithm and Hardware Platform for the HADES experiment, Ph.D. Thesis, II. Physikalisches Institut der Justus-Liebig-Universität Gießen, urn:nbn:de:hebis:26-opus-50784, October 2007 [1]].

  20. Perfluorocarbon Tracer Experiments on a 2 km Scale in Manchester Showing Ingress of Pollutants into a Building

    NASA Astrophysics Data System (ADS)

    Matthews, James; Wright, Matthew; Bacak, Asan; Silva, Hugo; Priestley, Michael; Martin, Damien; Percival, Carl; Shallcross, Dudley

    2016-04-01

    Cyclic perfluorocarbons (PFCs) have been used to measure the passage of air in urban and rural settings as they are chemically inert, non-toxic and have low background concentrations. The use of pre-concentrators and chemical ionisation gas chromatography enables concentrations of a few parts per quadrillion (ppq) to be measured in bag samples. Three PFC tracers were used in Manchester, UK in the summer of 2015 to map airflow in the city and ingress into buildings: perfluoromethylcyclohexane (PMCH), perfluoro-2-4-dimethylcyclohexane (mPDMCH) and perfluoro-2-methyl-3-ethylpentene (PMEP). A known quantity of each PFC was released for 15 minutes from steel canisters using pre-prepared PFC mixtures. Release points were chosen to be upwind of the central sampling location (Simon Building, University of Manchester) and varied in distance up to 2.2 km. Six releases using one or three tracers in different configurations and under different conditions were undertaken in the summer. Three further experiments were conducted in the autumn to investigate more closely the rate of ingress and decay of tracer indoors. In each experiment, 10-litre samples were collected into Tedlar bags over 30 minutes, starting at the same time as the PFC release. Samples were taken in 11 locations chosen from 15 identified areas, including three in public parks, three outside within the University of Manchester area, seven inside and five outside of the Simon building and two outside a building nearby. For building measurements, receptors were placed inside the buildings on different floors; outside measurements were achieved through a sample line out of the window. Three of the sample positions inside the Simon building were paired with samplers outside to allow indoor-outdoor comparisons. PFC concentrations varied depending on location and height. The highest measured concentrations occurred when the tracer was released at sunrise; up to 330 ppq above background (11 ppq) of PMCH was measured at the 6

  1. A simple optical index shows spatial and temporal heterogeneity in phytoplankton community composition during the 2008 North Atlantic Bloom Experiment

    NASA Astrophysics Data System (ADS)

    Cetinić, I.; Perry, M. J.; D'Asaro, E.; Briggs, N.; Poulton, N.; Sieracki, M. E.; Lee, C. M.

    2015-04-01

    The ratio of two in situ optical measurements - chlorophyll fluorescence (Chl F) and optical particulate backscattering (bbp) - varied with changes in phytoplankton community composition during the North Atlantic Bloom Experiment in the Iceland Basin in 2008. Using ship-based measurements of Chl F, bbp, chlorophyll a (Chl), high-performance liquid chromatography (HPLC) pigments, phytoplankton composition and carbon biomass, we found that oscillations in the ratio varied with changes in plankton community composition; hence we refer to Chl F/bbp as an "optical community index". The index varied by more than a factor of 2, with low values associated with pico- and nanophytoplankton and high values associated with diatom-dominated phytoplankton communities. Observed changes in the optical index were driven by taxa-specific chlorophyll-to-autotrophic carbon ratios and by physiological changes in Chl F associated with the silica limitation. A Lagrangian mixed-layer float and four Seagliders, operating continuously for 2 months, made similar measurements of the optical community index and followed the evolution and later demise of the diatom spring bloom. Temporal changes in optical community index and, by implication, the transition in community composition from diatom to post-diatom bloom communities were not simultaneous over the spatial domain surveyed by the ship, float and gliders. The ratio of simple optical properties measured from autonomous platforms, when carefully validated, provides a unique tool for studying phytoplankton patchiness on extended temporal scales and ecologically relevant spatial scales and should offer new insights into the processes regulating patchiness.

  2. The Majorana Demonstrator: Progress towards showing the feasibility of a 76Ge neutrinoless double-beta decay experiment

    SciTech Connect

    Finnerty, P.; Aguayo, Estanislao; Amman, M.; Avignone, Frank T.; Barabash, Alexander S.; Barton, P. J.; Beene, Jim; Bertrand, F.; Boswell, M.; Brudanin, V.; Busch, Matthew; Chan, Yuen-Dat; Christofferson, Cabot-Ann; Collar, J. I.; Combs, Dustin C.; Cooper, R. J.; Detwiler, Jason A.; Doe, P. J.; Efremenko, Yuri; Egorov, Viatcheslav; Ejiri, H.; Elliott, S. R.; Esterline, James H.; Fast, James E.; Fields, N.; Fraenkle, Florian; Galindo-Uribarri, A.; Gehman, Victor M.; Giovanetti, G. K.; Green, M.; Guiseppe, Vincente; Gusey, K.; Hallin, A. L.; Hazama, R.; Henning, Reyco; Hoppe, Eric W.; Horton, Mark; Howard, Stanley; Howe, M. A.; Johnson, R. A.; Keeter, K.; Kidd, M. F.; Knecht, A.; Kochetov, Oleg; Konovalov, S.; Kouzes, Richard T.; LaFerriere, Brian D.; Leon, Jonathan D.; Leviner, L.; Loach, J. C.; Looker, Q.; Luke, P.; MacMullin, S.; Marino, Michael G.; Martin, R. D.; Merriman, Jason H.; Miller, M. L.; Mizouni, Leila; Nomachi, Masaharu; Orrell, John L.; Overman, Nicole R.; Perumpilly, Gopakumar; Phillips, David; Poon, Alan; Radford, D. C.; Rielage, Keith; Robertson, R. G. H.; Ronquest, M. C.; Schubert, Alexis G.; Shima, T.; Shirchenko, M.; Snavely, Kyle J.; Steele, David; Strain, J.; Timkin, V.; Tornow, Werner; Varner, R. L.; Vetter, Kai; Vorren, Kris R.; Wilkerson, J. F.; Yakushev, E.; Yaver, Harold; Young, A.; Yu, Chang-Hong; Yumatov, Vladimir

    2014-03-24

    The Majorana Demonstrator will search for the neutrinoless double-beta decay (0νββ) of the 76Ge isotope with a mixed array of enriched and natural germanium detectors. The observation of this rare decay would indicate that the neutrino is its own anti-particle, demonstrate that lepton number is not conserved, and provide information on the absolute mass scale of the neutrino. The Demonstrator is being assembled at the 4850 foot level of the Sanford Underground Research Facility in Lead, South Dakota. The array will be contained in a low-background environment and surrounded by passive and active shielding. The goals for the Demonstrator are: demonstrating a background rate less than 3 counts tonne⁻¹ year⁻¹ in the 4 keV region of interest (ROI) surrounding the 2039 keV 76Ge endpoint energy; establishing the technology required to build a tonne-scale germanium-based double-beta decay experiment; testing the recent claim of observation of 0νββ; and performing a direct search for light WIMPs (3-10 GeV/c²).

  3. Evaluation of several MS/MS search algorithms for analysis of spectra derived from electron transfer dissociation experiments.

    PubMed

    Kandasamy, Kumaran; Pandey, Akhilesh; Molina, Henrik

    2009-09-01

    Electron transfer dissociation (ETD) is increasingly becoming popular for high-throughput experiments especially in the identification of the labile post-translational modifications. Most search algorithms that are currently in use for querying MS/MS data against protein databases have been optimized on the basis of matching fragment ions derived from collision induced dissociation of peptides, which are dominated by b and y ions. However, electron transfer dissociation of peptides generates completely different types of fragments: c and z ions. The goal of our study was to test the ability of different search algorithms to handle data from this fragmentation method. We compared four MS/MS search algorithms (OMSSA, Mascot, Spectrum Mill, and X!Tandem) using approximately 170,000 spectra generated from a standard protein mix, as well as from complex proteomic samples which included a large number of phosphopeptides. Our analysis revealed (1) greater differences between algorithms than has been previously reported for CID data, (2) a significant charge state bias resulting in >60-fold difference in the numbers of matched doubly charged peptides, and (3) identification of 70% more peptides by the best performing algorithm than the algorithm identifying the least number of peptides. Our results indicate that the search engines for analyzing ETD derived MS/MS spectra are still in their early days and that multiple search engines could be used to reduce individual biases of algorithms.

  4. [Experience of medical support of a military parade on Red Square].

    PubMed

    Malykh, A B; Iakovlev, S V; Valevskiĭ, V V

    2014-03-01

    The authors present data on the medical support of military personnel taking part in the parade on Red Square dedicated to the anniversary of victory in the Great Patriotic War. Experience of running such events allowed an algorithm for the medical service to be worked out: a preparatory stage, a training stage, the parade itself, and the move out to permanent base. During the parade on Red Square, 18 people (parade participants and civilians) sought medical care. The authors conclude that, as a result of the medical support of the military personnel taking part in the parade, no infectious or group diseases were registered.

  5. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    NASA Astrophysics Data System (ADS)

    Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.

    2014-12-01

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited in the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.

  6. Biology, the way it should have been, experiments with a Lamarckian algorithm

    SciTech Connect

    Brown, F.M.; Snider, J.

    1996-12-31

    This paper investigates the case where some information can be extracted directly from the fitness function of a genetic algorithm so that mutation may be achieved essentially on the Lamarckian principle of acquired characteristics. The basic rationale is that such additional information will provide better mutations, thus speeding up the search process. Comparisons are made between a pure Neo-Darwinian genetic algorithm and this Lamarckian algorithm on a number of problems, including a problem of interest to the US Army.
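
    A minimal sketch of the Lamarckian principle, assuming a simple stochastic hill-climbing step stands in for the information extracted from the fitness function (a generic illustration, not the authors' implementation):

        import numpy as np

        def lamarckian_ga(f, dim=5, pop=30, gens=60, step=0.1, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (pop, dim))
            for _ in range(gens):
                # "Lifetime learning": nudge each individual downhill and, if the
                # move helps, let the acquired improvement be inherited directly.
                for i in range(pop):
                    trial = x[i] + rng.normal(0, step, dim)
                    if f(trial) < f(x[i]):
                        x[i] = trial
                fit = np.array([f(p) for p in x])
                parents = x[np.argsort(fit)[: pop // 2]]              # selection
                kids = parents + rng.normal(0, step, parents.shape)   # Neo-Darwinian mutation
                x = np.vstack([parents, kids])
            fit = np.array([f(p) for p in x])
            return x[fit.argmin()], fit.min()

        # e.g. lamarckian_ga(lambda p: np.sum(p**2)) converges near the origin.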

  7. Tracking at CDF: algorithms and experience from Run I and Run II

    SciTech Connect

    Snider, F.D.; /Fermilab

    2005-10-01

    The authors describe the tracking algorithms used during Run I and Run II by CDF at the Fermilab Tevatron Collider, covering the time from about 1992 through the present, and discuss the performance of the algorithms at high luminosity. By tracing the evolution of the detectors and algorithms, they reveal some of the successful strategies used by CDF to address the problems of tracking at high luminosities.

  8. Thermal weapon sights with integrated fire control computers: algorithms and experiences

    NASA Astrophysics Data System (ADS)

    Rothe, Hendrik; Graswald, Markus; Breiter, Rainer

    2008-04-01

    The HuntIR long range thermal weapon sight of AIM has been deployed in various out-of-area missions since 2004 as part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as an upgrade with an integrated laser range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team up for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40 mm GMG requires highly accurate FCU solutions, specifically for air burst ammunition (ABM), and is most sensitive to faint effects like levelling or firing up/downhill. This weapon was therefore selected to validate the quality of the FCU hardware and software under relevant military conditions. For exterior ballistics the modified point mass model according to STANAG 4355 is used. The differential equations of motion are solved numerically; the two-point boundary value problem is solved iteratively. Computing time varies according to the precision needed and is typically in the range of 0.1-0.5 seconds. RangIR provided outstanding hit accuracy, including ABM fuze timing, in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sight (TWS) in a variety of light supporting weapon systems.
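
    The overall structure of such a solution can be sketched with a far simpler model: integrate a planar point-mass trajectory with a lumped drag constant, then solve the two-point boundary value problem for the launch elevation by bisection. All constants below are invented for illustration; the actual FCU uses the STANAG 4355 modified point mass model.

        import numpy as np

        G = 9.81          # gravity, m/s^2
        K_DRAG = 5e-4     # lumped drag constant, 1/m (invented)

        def impact_range(elev_rad, v0=240.0, dt=0.01):
            """Integrate a 2D point-mass trajectory until the round returns to y = 0."""
            vx, vy = v0 * np.cos(elev_rad), v0 * np.sin(elev_rad)
            x = y = 0.0
            while not (y < 0.0 and vy < 0.0):
                v = np.hypot(vx, vy)
                vx -= K_DRAG * v * vx * dt
                vy -= (G + K_DRAG * v * vy) * dt
                x += vx * dt
                y += vy * dt
            return x

        def solve_elevation(target_range, lo=0.01, hi=np.radians(45.0)):
            """Iterative solution of the two-point boundary value problem: bisection
            over the low-arc branch, where range grows with elevation."""
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if impact_range(mid) < target_range else (lo, mid)
            return 0.5 * (lo + hi)

        # e.g. np.degrees(solve_elevation(1500.0)) gives the required elevation angle.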

  9. An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1999-01-01

    The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.
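
    The greedy core of the method is easy to state, in contrast to the specialized implementations the paper benchmarks: repeatedly eliminate a vertex of minimum degree in the elimination graph and make its neighbors pairwise adjacent to model fill. A naive Python sketch:

        def minimum_degree_order(adj):
            """adj: dict mapping vertex -> set of neighbors (symmetric).
            Returns an elimination ordering intended to reduce fill."""
            adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
            order = []
            while adj:
                v = min(adj, key=lambda u: len(adj[u]))       # vertex of minimum degree
                nbrs = adj.pop(v)
                for u in nbrs:
                    adj[u].discard(v)                         # remove the eliminated vertex
                    adj[u] |= nbrs - {u}                      # clique the neighbors (fill edges)
                order.append(v)
            return order

        # e.g. minimum_degree_order({0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}})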

  10. A modified algorithm for continuous wave near infrared spectroscopy applied to in-vivo animal experiments and on human skin

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; Hopman, Jeroen C. W.; Liem, K. Djien; de Roode, Rowland; Verdaasdonk, Rudolf M.; Thijssen, Johan M.

    2008-02-01

    Continuous-wave near infrared spectroscopy is a well-known non-invasive technique for measuring changes in tissue oxygenation. Absorption changes (ΔO2Hb and ΔHHb) are calculated from the light attenuations using the modified Lambert-Beer equation. Generally, the concentration changes are calculated relative to the concentration at a starting point in time (delta-time method). It is also possible, under certain assumptions, to calculate the concentrations by subtracting the equations at different wavelengths (delta-wavelength method). We derived a new algorithm and show its possibilities and limitations. In the delta-wavelength method, the assumption is that the oxygen-independent attenuation term is eliminated from the formula even if its value changes in time. We verified the results against the classical delta-time method using extinction coefficients from different literature sources for the wavelengths 767 nm, 850 nm and 905 nm. The different methods of calculating concentration changes were applied to data collected from animal experiments. The animals (lambs) were initially in a stable normoxic condition; stepwise they were made hypoxic, and thereafter they returned to the normoxic condition. The two algorithms were also applied to measuring two-dimensional blood oxygen saturation changes in human skin tissue. The different oxygen saturation levels were induced by alterations in respiration and by temporary arm clamping. In steady-state measurements, the new delta-wavelength method yielded the same changes in oxy- and deoxyhemoglobin as the classical delta-time method. The advantage of the new method is its independence from possible variation of the oxygen-independent attenuation in time.
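
    In both methods the underlying computation is a small linear system from the modified Lambert-Beer equation: the attenuation change at each wavelength is a mixture of the two chromophore concentration changes, scaled by the optical pathlength. A least-squares sketch follows; the extinction coefficients and pathlength are placeholders, not values from the literature sources the authors compare.

        import numpy as np

        # Rows: wavelengths (e.g. 767, 850, 905 nm); columns: O2Hb, HHb.
        # Placeholder extinction coefficients, NOT literature values.
        E = np.array([[0.60, 1.20],
                      [1.10, 0.80],
                      [1.30, 0.70]])

        path = 5.0 * 4.0   # source-detector distance x differential pathlength factor

        def concentration_changes(delta_A):
            """Solve delta_A = path * E @ [dO2Hb, dHHb] in the least-squares sense."""
            sol, *_ = np.linalg.lstsq(path * E, delta_A, rcond=None)
            return sol     # [delta O2Hb, delta HHb]

        # Delta-time method: delta_A = A(t) - A(t0) at each wavelength.  The
        # delta-wavelength method instead differences the equations between
        # wavelengths so that a drifting oxygen-independent term cancels.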

  11. Inversion algorithms for the microwave remote sensing of soil moisture. Experiments with swept frequency microwaves

    NASA Technical Reports Server (NTRS)

    Hancock, G. D.; Waite, W. P.

    1984-01-01

    Two experiments were performed employing swept-frequency microwaves for the purpose of investigating the reflectivity from soil volumes containing both discontinuous and continuous changes in subsurface soil moisture content. Discontinuous moisture profiles were artificially created in the laboratory, while continuous moisture profiles were induced into the soil of test plots by the environment of an agricultural field. The reflectivity for both the laboratory and field experiments was measured using bi-static reflectometers operated over the frequency ranges of 1.0 to 2.0 GHz and 4.0 to 8.0 GHz. Reflectivity models that considered the discontinuous and continuous moisture profiles within the soil volume were developed and compared with the results of the experiments. This comparison shows good agreement between the smooth-surface models and the measurements. In particular, the comparison of the smooth-surface multi-layer model for continuous moisture profiles with the field experiment measurements points out the sensitivity of the specular component of the scattered electromagnetic energy to the movement of moisture in the soil.

  12. EUV SpectroPhotometer (ESP) in Extreme Ultraviolet Variability Experiment (EVE): Algorithms and Calibrations

    NASA Astrophysics Data System (ADS)

    Didkovsky, L.; Judge, D.; Wieman, S.; Woods, T.; Jones, A.

    2012-01-01

    The Extreme ultraviolet SpectroPhotometer (ESP) is one of five channels of the Extreme ultraviolet Variability Experiment (EVE) onboard the NASA Solar Dynamics Observatory (SDO). The ESP channel design is based on a highly stable diffraction transmission grating and is an advanced version of the Solar Extreme ultraviolet Monitor (SEM), which has been successfully observing solar irradiance onboard the Solar and Heliospheric Observatory (SOHO) since December 1995. ESP is designed to measure solar Extreme UltraViolet (EUV) irradiance in four first-order bands of the diffraction grating centered around 19 nm, 25 nm, 30 nm, and 36 nm, and in a soft X-ray band from 0.1 to 7.0 nm in the zeroth-order of the grating. Each band’s detector system converts the photo-current into a count rate (frequency). The count rates are integrated over 0.25-second increments and transmitted to the EVE Science and Operations Center for data processing. An algorithm for converting the measured count rates into solar irradiance and the ESP calibration parameters are described. The ESP pre-flight calibration was performed at the Synchrotron Ultraviolet Radiation Facility of the National Institute of Standards and Technology. Calibration parameters were used to calculate absolute solar irradiance from the sounding-rocket flight measurements on 14 April 2008. These irradiances for the ESP bands closely match the irradiance determined for two other EUV channels flown simultaneously: EVE’s Multiple EUV Grating Spectrograph (MEGS) and SOHO’s Charge, Element and Isotope Analysis System/ Solar EUV Monitor (CELIAS/SEM).

  13. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  14. Good-Enough Brain Model: Challenges, Algorithms, and Discoveries in Multisubject Experiments.

    PubMed

    Papalexakis, Evangelos E; Fyshe, Alona; Sidiropoulos, Nicholas D; Talukdar, Partha Pratim; Mitchell, Tom M; Faloutsos, Christos

    2014-12-01

    Given a simple noun such as apple, and a question such as "Is it edible?," what processes take place in the human brain? More specifically, given the stimulus, what are the interactions between (groups of) neurons (also known as functional connectivity) and how can we automatically infer those interactions, given measurements of the brain activity? Furthermore, how does this connectivity differ across different human subjects? In this work, we show that this problem, even though originating from the field of neuroscience, can benefit from big data techniques; we present a simple, novel good-enough brain model, or GeBM in short, and a novel algorithm Sparse-SysId, which are able to effectively model the dynamics of the neuron interactions and infer the functional connectivity. Moreover, GeBM is able to simulate basic psychological phenomena such as habituation and priming (whose definition we provide in the main text). We evaluate GeBM by using real brain data. GeBM produces brain activity patterns that are strikingly similar to the real ones, where the inferred functional connectivity is able to provide neuroscientific insights toward a better understanding of the way that neurons interact with each other, as well as detect regularities and outliers in multisubject brain activity measurements.

  15. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  16. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the…

  17. Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments

    NASA Astrophysics Data System (ADS)

    Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.

    2008-04-01

    We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of, non-collocated but otherwise identical, sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The direction-of-arrivals determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very-short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
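
    The time-delay-of-arrival information that the AML algorithm exploits can be illustrated with a plain cross-correlation estimate between two microphones. This is a sketch of the generic TDOA idea only, not the enhanced AML estimator itself.

        import numpy as np

        def tdoa(sig_a, sig_b, fs):
            """Delay of sig_b relative to sig_a, in seconds, from the peak
            of their full cross-correlation (signals of equal length)."""
            corr = np.correlate(sig_b, sig_a, mode="full")
            lag = int(np.argmax(corr)) - (len(sig_a) - 1)
            return lag / fs

        # With delays from several microphone pairs and the speed of sound,
        # the source position follows from intersecting the induced hyperbolae,
        # e.g. by nonlinear least squares over candidate source locations.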

  18. Validation of the ORA spatial inversion algorithm with respect to the Stratospheric Aerosol and Gas Experiment II data.

    PubMed

    Fussen, D; Arijs, E; Nevejans, D; Van Hellemont, F; Brogniez, C; Lenoble, J

    1998-05-20

    We present the results of a comparison of the total extinction altitude profiles measured at the same time and at the same location by the ORA (Occultation Radiometer) and Stratospheric Aerosol and Gas Experiment II solar occultation experiments at three different wavelengths. A series of 25 events for which the grazing points of both experiments lie within a 2° window has been analyzed. The mean relative differences observed over the altitude range 15-45 km are -8.4%, 1.6%, and 3% for the three channels (0.385, 0.6, and 1.02 μm). Some systematic degradation occurs below 20 km (as the result of signal saturation and possible cloud interference) and above 40 km (low absorption). The fair general agreement between the extinction profiles obtained by the two different instruments enhances our confidence in the results of the ORA experiment and of the recently developed vertical inversion algorithm applied to real data.

  19. Simulation and experiment research of face recognition with modified multi-method morphological correlation algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Xuping, Zhang

    2007-03-01

    A morphological definition of the similarity degree of gray-scale images and a general definition of morphological correlation (GMC) are proposed. The hardware and software design of a compact joint transform correlator is presented in order to implement GMC. Two kinds of modified general morphological correlation algorithms are proposed. The gray-scale image is decomposed into a set of binary image slices by a chosen decomposition method. In the first algorithm, the edge of each binary joint image slice is detected, the width adjustability of which is investigated, and the joint power spectra of the edges are summed. In the second algorithm, the joint power spectrum of each pair is binarized or thinned and then summed in one situation, or the summation of the joint power spectra of the pairs is binarized or thinned in the other situation. Computer-simulation results and real face image recognition results indicate that the modified algorithms can improve the discrimination capability for gray-scale face images of high similarity.

  20. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be entered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, while still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
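
    The acceptance rule at the heart of simulated annealing is compact enough to sketch generically. Here the channel-routing cost function and transformation heuristics are abstracted into caller-supplied functions; this is an illustration of the method, not the code described in the thesis.

        import math, random

        def anneal(initial, cost, neighbor, t0=10.0, cooling=0.995, steps=20_000, seed=0):
            rng = random.Random(seed)
            state, c = initial, cost(initial)
            best, best_c = state, c
            t = t0
            for _ in range(steps):
                cand = neighbor(state, rng)     # e.g. move a net; overlaps are allowed
                cc = cost(cand)                 # but penalized inside the cost function
                # Metropolis rule: uphill moves are sometimes accepted, which lets
                # the search back out of local minima.
                if cc <= c or rng.random() < math.exp((c - cc) / t):
                    state, c = cand, cc
                    if c < best_c:
                        best, best_c = state, c
                t *= cooling                    # gradually lower the temperature
            return best, best_c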

  1. Using the Metropolis Algorithm to Calculate Thermodynamic Quantities: An Undergraduate Computational Experiment

    ERIC Educational Resources Information Center

    Beddard, Godfrey S.

    2011-01-01

    Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
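
    A minimal version of such an experiment, assuming a quantum harmonic oscillator with levels E_n = n + 1/2 in units where the level spacing and the Boltzmann constant are 1 (a standard textbook construction, not the article's own code):

        import numpy as np

        def metropolis_mean_energy(T, steps=200_000, seed=0):
            """Average energy of a harmonic oscillator at temperature T."""
            rng = np.random.default_rng(seed)
            n = 0
            total = 0.0
            for _ in range(steps):
                trial = max(0, n + rng.choice((-1, 1)))   # propose a neighboring level
                if trial <= n or rng.random() < np.exp(-(trial - n) / T):
                    n = trial                             # Metropolis acceptance rule
                total += n + 0.5
            return total / steps

        # Exact result for comparison: 0.5 + 1/(np.exp(1/T) - 1)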

  2. Experiments in Discourse Analysis Impact on Information Classification and Retrieval Algorithms.

    ERIC Educational Resources Information Center

    Morato, Jorge; Llorens, J.; Genova, G.; Moreiro, J. A.

    2003-01-01

    Discusses the inclusion of contextual information in indexing and retrieval systems to improve results and the ability to carry out text analysis by means of linguistic knowledge. Presents research that investigated whether discourse variables have an impact on information and retrieval and classification algorithms. (Author/LRW)

  3. MINLITH—an experience-based algorithm for estimating the likely mineralogical compositions of sedimentary rocks from bulk chemical analyses

    NASA Astrophysics Data System (ADS)

    Rosen, Oleg M.; Abbyasov, Ali A.; Tipper, John C.

    2004-07-01

    The MINLITH algorithm is a tool for estimating the likely mineralogical compositions of sedimentary rocks, using information from bulk chemical analyses. It is an experience-based algorithm that represents compositions in terms of a simplified set of normative minerals. MINLITH has been designed to be applied principally to mature sedimentary rocks, but it can (with care) be applied also to immature sediments and to metasedimentary rocks; the compositions that MINLITH gives for metasedimentary rocks are approximations to the original (i.e. pre-metamorphic) mineralogical compositions. The experience base on which MINLITH is built is a collection of 600 reference samples of sedimentary rocks. The compositional regularities found in these samples have allowed empirical rules to be developed to predict how the oxides reported in a bulk chemical analysis should be partitioned among the minerals most likely to be present. The discrepancies between MINLITH-estimated compositions and physically determined modal compositions are relatively small for the most widespread types of mature sedimentary rocks; they are comparable in their magnitude to the discrepancies associated with other methods for estimating mineralogical compositions from bulk chemical analyses, and to the discrepancies associated with quantitative X-ray diffractometry. The MINLITH algorithm is of particular value: (1) for providing preliminary estimates of mineralogical composition, prior to precise modal analysis; (2) for identifying systematic compositional variation within suites of samples; (3) in generalised sample classification; (4) in the sedimentological interpretation of metasedimentary rocks.

  4. Multi-Pivot Quicksort: an Experiment with Single, Dual, Triple, Quad, and Penta-Pivot Quicksort Algorithms in Python

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Zamzami, E. M.; Rachmawati, D.

    2017-03-01

    Dual-pivot quicksort, which was proposed by Yaroslavsky, has been experimentally proven to be more efficient than the classical single-pivot quicksort under the Java Virtual Machine [6]. Moreover, Kushagara, López-Ortiz, and Munro [4] have shown that triple-pivot quicksort runs 7-8% faster than dual-pivot quicksort in C, mutatis mutandis. In this research, we implement and experiment with single-, dual-, triple-, quad-, and penta-pivot quicksort algorithms in Python. Our experimental results are as follows. Firstly, the quicksort with a single pivot is the slowest among the five variants. Secondly, at least up to five (penta) pivots, the more pivots used in a quicksort algorithm, the faster its performance becomes. Thirdly, the increase in speed resulting from adding more pivots tends to decrease gradually.
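
    For reference, the dual-pivot variant is short in Python. The sketch below is a concise out-of-place version; the implementations actually timed in the paper are presumably in-place and more tuned.

        def dual_pivot_quicksort(a):
            """Yaroslavsky-style dual-pivot quicksort, recursive sketch."""
            if len(a) <= 1:
                return a
            p, q = min(a[0], a[-1]), max(a[0], a[-1])   # two pivots, p <= q
            rest = a[1:-1]
            low = [x for x in rest if x < p]            # below both pivots
            mid = [x for x in rest if p <= x <= q]      # between the pivots
            high = [x for x in rest if x > q]           # above both pivots
            return (dual_pivot_quicksort(low) + [p] +
                    dual_pivot_quicksort(mid) + [q] +
                    dual_pivot_quicksort(high))

        # e.g. dual_pivot_quicksort([5, 2, 9, 1, 7]) -> [1, 2, 5, 7, 9]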

  5. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  6. Pre-Mrna Introns as a Model for Cryptographic Algorithm:. Theory and Experiments

    NASA Astrophysics Data System (ADS)

    Regoli, Massimo

    2010-01-01

    The RNA-Crypto System (RCS for short) is a symmetric-key algorithm for enciphering data. The idea for this new algorithm comes from observing nature, in particular RNA behavior and some of its properties. RNA sequences contain sections called introns. Introns, whose name derives from the term "intragenic regions", are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and role of introns in the pre-mRNA are not clear and are under intensive research by biologists but, in our case, we use the presence of introns in the RNA-Crypto System output as a strong method to add chaotic non-coding information and obfuscating behaviour in the access to the secret key used to encode messages. In the RNA-Crypto System algorithm the introns are sections of the enciphered message carrying non-coding information, just as in the precursor mRNA.

  7. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  8. Automated decision algorithm applied to a field experiment with multiple research objectives: The DC3 campaign

    NASA Astrophysics Data System (ADS)

    Hanlon, Christopher J.; Small, Arthur A.; Bose, Satyajit; Young, George S.; Verlinde, Johannes

    2014-10-01

    Automated decision systems have shown the potential to increase data yields from field experiments in atmospheric science. The present paper describes the construction and performance of a flight decision system designed for a case in which investigators pursued multiple, potentially competing objectives. The Deep Convective Clouds and Chemistry (DC3) campaign in 2012 sought in situ airborne measurements of isolated deep convection in three study regions: northeast Colorado, north Alabama, and a larger region extending from central Oklahoma through northwest Texas. As they confronted daily flight launch decisions, campaign investigators sought to achieve two mission objectives that stood in potential tension to each other: to maximize the total amount of data collected while also collecting approximately equal amounts of data from each of the three study regions. Creating an automated decision system involved understanding how investigators would themselves negotiate the trade-offs between these potentially competing goals, and representing those preferences formally using a utility function that served to rank-order the perceived value of alternative data portfolios. The decision system incorporated a custom-built method for generating probabilistic forecasts of isolated deep convection and estimated climatologies calibrated to historical observations. Monte Carlo simulations of alternative future conditions were used to generate flight decision recommendations dynamically consistent with the expected future progress of the campaign. Results show that a strict adherence to the recommendations generated by the automated system would have boosted the data yield of the campaign by between 10 and 57%, depending on the metrics used to score success, while improving portfolio balance.
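
    The paper's actual utility function is not reproduced in the record; the sketch below shows one standard way to encode the stated trade-off, a Cobb-Douglas-style blend that rewards total data volume while penalizing imbalance across the three study regions. The function name, the blending parameter and the hour units are assumptions for illustration.

    ```python
    def portfolio_utility(hours_by_region, balance_weight=0.5):
        """Toy rank-ordering utility over a data portfolio: rewards total
        flight data but penalizes imbalance across the study regions.
        balance_weight = 0 ignores balance; 1 insists on it."""
        n = len(hours_by_region)
        arithmetic = sum(hours_by_region) / n           # total-yield term
        geometric = 1.0
        for h in hours_by_region:
            geometric *= max(h, 1e-9) ** (1.0 / n)      # balance-sensitive term
        return arithmetic ** (1 - balance_weight) * geometric ** balance_weight

    # A balanced portfolio outranks a same-total, single-region one:
    print(portfolio_utility([10, 10, 10]) > portfolio_utility([30, 0, 0]))  # True
    ```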

  9. All Seasons Vertical Augmentation Mastopexy: A Simple Algorithm, Clinical Experience, and Patient-reported Outcomes

    PubMed Central

    2016-01-01

    Background: The safety of augmentation mastopexy has been questioned. Staging has been recommended for women deemed to be at higher risk, such as women with greater degrees of ptosis. Most existing studies evaluate women treated with multiple methods, including the traditional Wise pattern. This retrospective study specifically evaluates vertical augmentation mastopexy. A simple algorithm is introduced. Methods: From 2002 to 2016, 252 women underwent consecutive vertical augmentation mastopexies performed by the author, with no staged surgery. All patients underwent a vertical mastopexy using a medially based pedicle and intraoperative nipple siting. A subset of women treated from 2012 to 2016 were surveyed to obtain outcome data; 90 patients (inclusion rate, 90%) participated. Results: The complication rate was 32.9%, including persistent ptosis, delayed wound healing, scar deformities, and asymmetry. There were no cases of nipple loss. An increased risk of complications was detected for smokers (P < 0.01), but not for combined procedures, secondary breast augmentations, or secondary mastopexies. The revision rate was 15.5%. Persistent nipple numbness was reported by 13.3% of respondents. Eighty percent of women were self-conscious about their breast appearance before surgery; 22% of respondents were self-conscious about their breasts after surgery. Seventy percent of respondents reported an improved quality of life, 94.4% would repeat the surgery, and 95.6% would recommend it. Conclusions: A simple algorithm may be used to guide treatment in women who desire correction of ptosis and upper pole fullness. An "all seasons" vertical augmentation mastopexy is safe and widely applicable. Staging is unnecessary. PMID:28293517

  10. Field experiments of Controlled Drainage of agricultural clay soils show positive effects on water quantity (retention, runoff) and water quality (nitrate leaching).

    NASA Astrophysics Data System (ADS)

    Schipper, Peter; Stuyt, Lodewijk; van der Straat, Andre; van der Schans, Martin

    2014-05-01

    processes in the soil have been modelled with the simulation model SWAP. The experiment started in 2010 and is ongoing. Data collected so far show that the plots with controlled drainage (all compared with plots equipped with conventional drainage) conserve more rain water (higher groundwater tables in early spring), lower discharges under average weather conditions and storm events, reduce N-loads and saline seepage to surface waters, enhance denitrification, show a different 'first flush' effect and show similar crop yields. The results of the experiments will contribute to a better understanding of the impact of controlled drainage on complex hydrological and geochemical processes in agricultural clay soils, the interaction between ground- and surface water and its effects on drain water quantity, quality and crop yield.

  11. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    NASA Astrophysics Data System (ADS)

    Cieri, D.; CMS Collaboration

    2016-10-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate, and the goal of the new track trigger is to build L1 tracks from the tracker information. The architecture that will eventually process the tracker data is still under discussion; one possibility is to adopt a system based entirely on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the "MP7", a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach.
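
    As a rough illustration of the track-finding principle (not the MP7 firmware), the sketch below accumulates stub votes in a binned (phi0, curvature) Hough space, using the small-curvature approximation phi ≈ phi0 - c·r; the binning, the layer threshold and the coordinate conventions are assumptions.

    ```python
    import numpy as np

    def hough_track_finder(stubs, n_phi=64, n_curv=32, curv_max=0.005, min_layers=5):
        """Toy r-phi Hough transform: each tracker stub (r, phi) votes for
        all (phi0, curvature) hypotheses consistent with phi ~ phi0 - c * r."""
        acc = np.zeros((n_phi, n_curv), dtype=np.int32)
        curvatures = np.linspace(-curv_max, curv_max, n_curv)
        for r, phi in stubs:
            for j, c in enumerate(curvatures):
                phi0 = (phi + c * r) % (2 * np.pi)
                i = int(phi0 / (2 * np.pi) * n_phi) % n_phi
                acc[i, j] += 1        # one vote per stub and curvature bin
        # Cells where enough detector layers line up are track candidates.
        return np.argwhere(acc >= min_layers), acc
    ```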

  12. Testing Nelder-Mead based repulsion algorithms for multiple roots of nonlinear systems via a two-level factorial design of experiments.

    PubMed

    Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
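
    The record names the ingredients but not the formula; one plausible minimal form of such a merit function adds an erf-shaped bump around each root already found, so the N-M search is repelled from revisiting it. The constants beta and delta below are illustrative assumptions.

    ```python
    import math

    def repulsion_merit(f, found_roots, beta=100.0, delta=5.0):
        """Penalty-type merit: base residual f(x) plus an erf-shaped bump
        around each root already found, repelling N-M from converging there."""
        def merit(x):
            value = f(x)
            for r in found_roots:
                value += beta * (1.0 - math.erf(delta * math.dist(x, r)))
            return value
        return merit

    # f = squared residual of the nonlinear system F(x) = (x0^2 - 1, x1 - x0)
    f = lambda x: (x[0] ** 2 - 1) ** 2 + (x[1] - x[0]) ** 2
    m = repulsion_merit(f, found_roots=[(1.0, 1.0)])
    print(m((1.0, 1.0)) > m((-1.0, -1.0)))  # the known root is now penalized: True
    ```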

  13. Application and experience of a two-dosimeter algorithm for better estimation of effective dose during maintenance periods at Korea nuclear power plants.

    PubMed

    Kim, Hee Geun; Kong, Tae Young

    2009-01-01

    The application of a two-dosimeter method and its algorithm, and a test of its use in an inhomogeneous high radiation field, are described. The goal was to develop an improved method for estimating the effective dose during maintenance periods at Korean nuclear power plants (NPPs). The application and experience at Korean NPPs were evaluated using two-dosimeter results for each candidate algorithm in an inhomogeneous high radiation field during maintenance periods.
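
    The record does not give the weighting; a minimal sketch of the generic linear two-dosimeter form E ≈ a·H_front + b·H_back is shown below, with the 0.55/0.50 weights of one widely cited recommendation (NCRP Report No. 122) as default assumptions. The study compares several such (a, b) algorithms.

    ```python
    def effective_dose_two_dosimeter(h_front, h_back, a=0.55, b=0.50):
        """Generic linear two-dosimeter estimate E = a*H_front + b*H_back,
        with personal dose equivalents in mSv from the chest and back badges."""
        return a * h_front + b * h_back

    print(effective_dose_two_dosimeter(1.2, 0.4))   # 0.86 mSv
    ```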

  14. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  15. Karst show caves - how DTN technology as used in space assists automatic environmental monitoring and tourist protection - experiment in Postojna Cave

    NASA Astrophysics Data System (ADS)

    Gabrovšek, F.; Grašič, B.; Božnar, M. Z.; Mlakar, P.; Udén, M.; Davies, E.

    2014-02-01

    The paper presents an experiment demonstrating a novel and successful application of delay- and disruption-tolerant networking (DTN) technology for automatic data transfer in a karst cave early warning and measuring system. The experiment took place inside the Postojna Cave in Slovenia, which is open to tourists. Several automatic meteorological measuring stations are set up inside the cave, as an adjunct to the surveillance infrastructure; the regular data transfer provided by the DTN technology allows the surveillance system to take on the role of an early warning system (EWS). One of the stations is set up alongside the railway tracks, which allow tourists to travel inside the cave by train. The experiment was carried out by placing a DTN "data mule" (a DTN-enabled computer with WiFi connection) on the train and by upgrading the meteorological station with a DTN-enabled WiFi transmission system. When the data mule is in the wireless drive-by mode, it collects measurement data from the station over a period of several seconds as the train passes the stationary equipment without stopping, and delivers data at the final train station by the cave entrance. This paper describes an overview of the experimental equipment and organization allowing the use of a DTN system for data collection and an EWS inside karst caves where there is regular traffic of tourists and researchers.

  16. Karst show caves - how DTN technology as used in space assists automatic environmental monitoring and tourist protection - experiment in Postojna cave

    NASA Astrophysics Data System (ADS)

    Gabrovšek, F.; Grašič, B.; Božnar, M. Z.; Mlakar, P.; Udén, M.; Davies, E.

    2013-10-01

    The paper presents an experiment demonstrating a novel and successful application of Delay- and Disruption-Tolerant Networking (DTN) technology for automatic data transfer in a karst cave Early Warning and Measuring System. The experiment took place inside the Postojna Cave in Slovenia, which is open to tourists. Several automatic meteorological measuring stations are set up inside the cave, as an adjunct to the surveillance infrastructure; the regular data transfer provided by the DTN technology allows the surveillance system to take on the role of an Early Warning System (EWS). One of the stations is set up alongside the railway tracks, which allows the tourist to travel inside the cave by train. The experiment was carried out by placing a DTN "data mule" (a DTN-enabled computer with WiFi connection) on the train and by upgrading the meteorological station with a DTN-enabled WiFi transmission system. When the data mule is in the wireless drive-by mode, it collects measurement data from the station over a period of several seconds as the train passes the stationary equipment, and delivers data at the final train station by the cave entrance. This paper describes an overview of the experimental equipment and organisation allowing the use of a DTN system for data collection and an EWS inside karst caves where there is a regular traffic of tourists and researchers.

  17. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can acquire the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead focuses the SAR echoes with consistent imaging parameters. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446

  18. Honored Teacher Shows Commitment.

    ERIC Educational Resources Information Center

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  19. '…it shows that there's no limits': the psychological impact of co-production for experts by experience working in youth mental health.

    PubMed

    Mayer, Claire; McKenzie, Karen

    2017-05-01

    Co-production is commonly conceptualised as a more equal sharing of power and decision-making between a dichotomy of service user and service provider, each bringing valuable and different assets to the process. Experts by experience lie in the overlap between this conceptually created duality, providing the services they now do by virtue of having once used services themselves. Previous related studies suggest that their involvement in co-production could impact positively on their social capital, self-esteem, self-efficacy and life skills. However, no studies have been explicitly psychological or phenomenological in nature, and the theoretical basis for such outcomes remains under-developed. This phenomenological study explored the psychological impact of co-production for young people who were paid experts by experience for a young person's mental health charity in a large and diverse urban area in the UK, looking at the what of psychological impact, as well as the theoretical why and how. Semi-structured interviews were conducted with a convenience sample of five males, with a mean age of 25 years. Interpretative phenomenological analysis yielded three master themes: the co-production approach, I'm a professional and identities in transition. Participants valued a collegiate organisational approach that prioritised empowerment, agency and equality between experts by experience and 'experts by qualification', leading to a positive impact on their self-efficacy and self-esteem. Co-production impacted fundamentally on their identity structure, enabling them to explore a new identity as a 'professional'. The results are framed within identity process theory and point to the potential benefits of this model to co-production.

  20. Evolving evolutionary algorithms using linear genetic programming.

    PubMed

    Oltean, Mihai

    2005-01-01

    A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches for several well-known benchmarking problems.

  1. A Traffic Motion Object Extraction Algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Shaofei

    2015-12-01

    A motion object extraction algorithm based on the active contour model is proposed. Firstly, moving areas involving shadows are segmented with the classical background difference algorithm. Secondly, shadows are detected and coarsely removed, and a grid method is then used to extract initial contours. Finally, the active contour model approach is adopted to compute the contour of the real object by iteratively tuning the parameters of the model. Experiments show the algorithm can remove the shadow and keep the integrity of a moving object.
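
    A minimal sketch of the first step, classical background differencing, is shown below; the threshold value and 8-bit grayscale frames are illustrative assumptions, and the shadow-removal and active-contour stages are not reproduced.

    ```python
    import numpy as np

    def background_difference(frame, background, thresh=25):
        """Classical background difference: flag pixels that deviate from
        the background model as moving-object (or shadow) candidates."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        return (diff > thresh).astype(np.uint8)      # binary motion mask

    bg = np.full((4, 4), 100, dtype=np.uint8)
    frame = bg.copy()
    frame[1:3, 1:3] = 180                            # a bright moving "object"
    print(background_difference(frame, bg))
    ```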

  2. Retrieval of ozone and nitrogen dioxide concentrations from Stratospheric Aerosol and Gas Experiment III (SAGE III) measurements using a new algorithm

    NASA Astrophysics Data System (ADS)

    Polyakov, A. V.; Timofeyev, Y. M.; Ionov, D. V.; Virolainen, Y. A.; Steele, H. M.; Newchurch, M. J.

    2005-03-01

    We describe a new inversion algorithm developed for the retrieval of atmospheric constituents from Stratospheric Aerosol and Gas Experiment III (SAGE III) solar occultation measurements. The methodology differs from the operational (NASA) algorithm in several important ways. Our algorithm takes account of the finite altitude and spectral resolution of the measurements by integrating over the viewing window spectrally and spatially. We solve the problem nonlinearly by using optimal estimation theory, and we use an aerosol parameterization scheme based on eigenvectors derived from existing empirical and modeled information about their microphysical properties. The first four of these eigenvectors are employed in the retrieval algorithm to describe the spectral variation of the aerosol extinction. We retrieve ozone and nitrogen dioxide number densities and aerosol extinction from transmission measurements at 41 channels from 0.29 to 1.55 μm. In this paper we describe the results of the gas retrievals. Numerical simulations test the accuracy of the scheme, and subsequent retrievals from SAGE III transmission data for the period between May and October 2002 produce profiles of O3 and NO2. Comparisons of the O3 and NO2 profiles with those obtained using the SAGE III operational algorithm and with those from independent measurements made by satellites, ozonesondes, and lidar indicate agreement in ozone measurements in the middle and upper stratosphere significantly closer than the natural variability and agreement in the lower stratosphere and upper troposphere approximately equal to the natural variability.

  3. Neutron-Rich 62,64,66Fe Show Enhanced Collectivity: The Washout of N = 40 in Terms of Experiment, Valence Proton Symmetry and Shell Model

    SciTech Connect

    Rother, W.; Dewald, A.; Fransen, C.; Hackstein, M.; Jolie, J.; Pissulla, Th.; Zel, K.-O.; Iwasaki, H.; Baugher, T.; Brown, B. A.; Gade, A.; Glasmacher, T.; McDaniel, S.; Ratkiewicz, A.; Voss, P.; Walsh, K. A.; Lenzi, S. M.; Ur, C. A.; Starosta, K.; Bazin, D.

    2011-10-28

    Probing shell structure at a large neutron excess has been of particular interest in recent times. Neutron-rich nuclei between the proton shell closures Z = 20 and Z = 28 offer an exotic testing ground for shell evolution. The development of the N = 40 gap between the neutron fp and 1g9/2 shells gives rise to highly interesting variations of collectivity for nuclei in this region. While 68Ni shows doubly magic properties in level energies and transition strengths, this was not observed in neighbouring nuclei. Especially neutron-rich Fe isotopes proved particularly resistant to calculational approaches using the canonical valence space (fpg), resulting in important deviations of the predicted collectivity. Only an inclusion of the d5/2 orbital could solve the problem [1]. Hitherto no transition strengths for 66Fe have been reported. We determined B(E2; 2+1 → 0+1) values from lifetimes measured with the recoil distance Doppler-shift method using the Cologne plunger for radioactive beams at the National Superconducting Cyclotron Laboratory at Michigan State University. Excited states were populated by projectile Coulomb excitation for 62,64,66Fe. The data show a rise in collectivity for Fe isotopes towards N = 40. Results [2] are interpreted by means of a modified version of the Valence Proton Symmetry [3] and compared to shell model calculations using a new effective interaction recently developed for the fpgd valence space [4].

  4. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  5. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hotspot of electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) and the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after probing it based on the theories of AFSA. Experiment results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it shows good performance and broad, promising applications.

  6. Three list scheduling temporal partitioning algorithms: time-space characteristic analysis and comparison for dynamic reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Chen, Naijin

    2013-03-01

    Level Based Partitioning (LBP), Cluster Based Partitioning (CBP) and Enhanced Static List (ESL) temporal partitioning algorithms, based on adjacency matrices and adjacency tables, are designed and implemented in this paper. Partitioning time and memory occupation of the three algorithms are also compared. Experiment results show that the LBP partitioning algorithm requires the least partitioning time and exhibits better parallelism; as far as memory occupation and partitioning time are concerned, the algorithms based on adjacency tables need less partitioning time and less memory.

  7. Approximate learning algorithm in Boltzmann machines.

    PubMed

    Yasuda, Muneki; Tanaka, Kazuyuki

    2009-11-01

    Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is an NP-hard problem. Thus, in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
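
    For context, a minimal sketch of the exact learning gradient that such approximations target, <s_i s_j>_data - <s_i s_j>_model, is shown below; the brute-force enumeration over all 2^n states is feasible only for a handful of units, which is precisely why belief propagation and linear response approximations are needed. The zero-bias, ±1-unit setup is an illustrative assumption.

    ```python
    import itertools
    import numpy as np

    def exact_boltzmann_gradient(W, data):
        """Exact log-likelihood gradient dL/dW_ij = <s_i s_j>_data - <s_i s_j>_model
        for a zero-bias Boltzmann machine with +-1 units (tiny n only)."""
        n = W.shape[0]
        states = np.array(list(itertools.product([-1, 1], repeat=n)))
        energy = -0.5 * np.einsum('ki,ij,kj->k', states, W, states)
        p = np.exp(-energy)
        p /= p.sum()                                   # Boltzmann distribution
        model_corr = np.einsum('k,ki,kj->ij', p, states, states)
        data_corr = data.T @ data / len(data)          # data shape: (m, n), +-1
        return data_corr - model_corr
    ```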

  8. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions can be poor; after a bad initialization, the k-means algorithm easily reaches a poor local optimum. In this paper, we modify the global k-means algorithm to first eliminate singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experiment results show our proposed algorithm outperforms the other algorithms mentioned in the paper.
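
    A minimal sketch of the objective difference: k-means minimizes the sum of intra-cluster variances, while the MinMax criterion targets the largest one, which is what counters singleton clusters and bad initializations. The toy data, seeding and cluster count below are illustrative assumptions.

    ```python
    import numpy as np

    def cluster_variances(X, centers):
        """Per-cluster sums of squared distances to the nearest center."""
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        return np.array([d[labels == k, k].sum() for k in range(len(centers))])

    # k-means minimizes v.sum(); the MinMax criterion minimizes v.max(),
    # which penalizes the unbalanced/singleton solutions described above.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
    centers = X[rng.choice(len(X), size=2, replace=False)]
    v = cluster_variances(X, centers)
    print(v.sum(), v.max())
    ```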

  9. Not a "reality" show.

    PubMed

    Wrong, Terence; Baumgart, Erica

    2013-01-01

    The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show.

  10. A region labeling algorithm based on block

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    2009-10-01

    The time performance of a region labeling algorithm is important for image processing. However, common region labeling algorithms cannot meet the requirements of real-time image processing. In this paper, a technique using blocks to record connective areas is proposed. With this technique, connective closure and information related to the target can be computed during a single image scan. It records the edge pixels' coordinates, including outer and inner side edges, as well as their labels, and from these it can calculate each connected area's shape center, area and gray level. Compared to others, this block-based region labeling algorithm is more efficient and can well meet the time requirements of real-time processing. Experiment results also validate the correctness and efficiency of the algorithm, showing that it can detect any connected areas in binary images containing various complex and intricate patterns. The block labeling algorithm is now used in a real-time image processing program.

  11. A hybrid monkey search algorithm for clustering analysis.

    PubMed

    Chen, Xin; Zhou, Yongquan; Luo, Qifang

    2014-01-01

    Clustering is a popular data analysis and data mining technique, and the k-means clustering algorithm is one of the most commonly used methods. However, it depends highly on the initial solution and easily falls into a local optimum. In view of these disadvantages of the k-means method, this paper proposes a hybrid monkey algorithm based on the search operator of the artificial bee colony algorithm for clustering analysis; experiments on synthetic and real-life datasets show that the algorithm performs better than the basic monkey algorithm for clustering analysis.

  12. Selection of views to materialize using simulated annealing algorithms

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains lots of materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize for answering a given set of queries. The goal is to minimize the combination of the query evaluation and view maintenance costs. In this paper, we address and design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve it. First, we explore simulated annealing algorithms to optimize the selection of materialized views. Then we use experiments to demonstrate our approach. We implemented our algorithms, and a performance study shows that the proposed algorithm works well and gives an optimal solution.
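
    A minimal sketch of the simulated annealing loop over a 0/1 view-selection vector is shown below, under an assumed toy cost function; the cooling schedule, move set and cost model are illustrative, not the paper's.

    ```python
    import math, random

    def sa_select_views(cost, n_views, n_iter=5000, t0=10.0):
        """Simulated annealing over a 0/1 vector: bit k means 'materialize
        view k'. cost(sel) returns query-evaluation + maintenance cost."""
        sel = [random.random() < 0.5 for _ in range(n_views)]
        cur_cost = best_cost = cost(sel)
        best = sel[:]
        for i in range(n_iter):
            t = t0 * (1 - i / n_iter) + 1e-9              # linear cooling
            cand = sel[:]
            k = random.randrange(n_views)
            cand[k] = not cand[k]                         # flip one view
            c = cost(cand)
            if c < cur_cost or random.random() < math.exp((cur_cost - c) / t):
                sel, cur_cost = cand, c                   # accept (maybe uphill)
                if c < best_cost:
                    best, best_cost = cand[:], c
        return best, best_cost

    # Toy cost: unanswered queries are expensive; each view adds maintenance.
    toy_cost = lambda sel: 100.0 / (1 + sum(sel)) + 4.0 * sum(sel)
    print(sa_select_views(toy_cost, n_views=10))
    ```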

  13. Permanent prostate implant using high activity seeds and inverse planning with fast simulated annealing algorithm: A 12-year Canadian experience

    SciTech Connect

    Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric. E-mail: Eric.Vigneault@chuq.qc.ca

    2007-02-01

    Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate specific antigen (PSA) ≤10 and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with the fast simulated annealing inverse planning algorithm with high activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low risk group (initial PSA ≤10 and Gleason ≤6 and Stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5% with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs), 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free, 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: Inverse planning with fast simulated annealing and high activity seeds gives a 5-year bFFS which is comparable with the best published series, with a low toxicity profile.

  14. Improved ant colony algorithm and its simulation study

    NASA Astrophysics Data System (ADS)

    Wang, Zongjiang

    2013-03-01

    The ant colony algorithm is a new heuristic algorithm developed by simulating ant foraging. To address its slow convergence rate and its tendency to fall into local optimal solutions, adjustments of key parameters and an improved pheromone update scheme are proposed. Through experiments on the TSP, the results show that the improved algorithm has better overall search capability, demonstrating the feasibility and effectiveness of this method.
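
    The record does not spell out the paper's exact modifications; the sketch below shows a standard pheromone update with two commonly tuned ingredients, an evaporation rate and MAX-MIN-style clamping, the latter being one well-known guard against premature convergence. All constants and the data layout are assumptions.

    ```python
    def update_pheromones(tau, ants, rho=0.5, q=1.0, tau_min=0.01, tau_max=10.0):
        """One ACO pheromone update: evaporation, tour reinforcement, then
        MAX-MIN-style clamping against premature convergence.
        tau: n x n pheromone matrix; ants: list of (tour, tour_length)."""
        n = len(tau)
        for i in range(n):                            # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in ants:                     # reinforcement (closed tour)
            for a, b in zip(tour, tour[1:] + tour[:1]):
                tau[a][b] += q / length
        for i in range(n):                            # clamp to [tau_min, tau_max]
            for j in range(n):
                tau[i][j] = min(max(tau[i][j], tau_min), tau_max)
        return tau
    ```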

  15. A Markov Chain Monte Carlo Algorithm for Infrasound Atmospheric Sounding: Application to the Humming Roadrunner experiment in New Mexico

    NASA Astrophysics Data System (ADS)

    Lalande, Jean-Marie; Waxler, Roger; Velea, Doru

    2016-04-01

    As infrasonic waves propagate at long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update atmospheric properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter search performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization and yield estimation.
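
    A minimal random-walk Metropolis sketch of the idea: draw samples whose histogram approximates the full probability density function, instead of scanning the parameter space exhaustively. The quadratic toy log-posterior below stands in for the real infrasound travel-time misfit, and the step size is an assumption.

    ```python
    import math, random

    def metropolis_hastings(log_post, x0, n_samples=10000, step=0.1):
        """Random-walk Metropolis sampler: the returned draws approximate
        the posterior over the atmospheric parameters."""
        x = list(x0)
        lp = log_post(x)
        samples = []
        for _ in range(n_samples):
            cand = [xi + random.gauss(0.0, step) for xi in x]
            lp_cand = log_post(cand)
            if random.random() < math.exp(min(0.0, lp_cand - lp)):  # accept/reject
                x, lp = cand, lp_cand
            samples.append(list(x))
        return samples

    # Toy log-posterior standing in for the travel-time misfit:
    log_post = lambda x: -0.5 * sum((xi - 1.0) ** 2 for xi in x)
    draws = metropolis_hastings(log_post, [0.0, 0.0])
    print(sum(d[0] for d in draws) / len(draws))   # ~1.0
    ```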

  16. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification experiments using thematic mapper and multispectral scanner sensor data

    NASA Technical Reports Server (NTRS)

    Di Zenzo, Silvano; Degloria, Stephen D.; Bernstein, R.; Kolsky, Harwood G.

    1987-01-01

    The paper presents the results of a four-factor, two-level analysis-of-variance experiment designed to evaluate the combined effect of improved remote-sensor data quality and the classifier's use of context on classification accuracy. The improvement achievable by using context via relaxation techniques is significantly smaller than that provided by an increase in the radiometric resolution of the sensor from 6 to 8 bits per sample (the relative increase in radiometric resolution of TM relative to MSS). It is almost equal to that achievable by the increase in spectral coverage provided by TM relative to MSS.

  17. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  18. Hey Teacher, Your Personality's Showing!

    ERIC Educational Resources Information Center

    Paulsen, James R.

    1977-01-01

    A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)

  19. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm - the PGATS algorithm - based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm and gives full play to the advantages of each. The method is validated on the universal standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed new method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, proving it to be an effective way to predict the structure of proteins.

  20. Improved algorithm for hyperspectral data dimension determination

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Du, Lei; Li, Jing; Han, Yachao; Gao, Zihong

    2017-02-01

    The correlation between adjacent bands of hyperspectral image data is relatively strong, but signal coexists with noise. The HySime (hyperspectral signal identification by minimum error) algorithm, which is based on the principle of least squares, estimates the noise and the signal correlation matrix. The algorithm is effective when the noise estimate is accurate, but ineffective when the noise estimate is obtained from spectral dimension reduction and de-correlation. This paper proposes an improved HySime algorithm based on a noise-whitening process: instead of removing noise pixel by pixel, it first whitens the noise in the original data, obtains an accurate estimate of the noise covariance matrix, and then uses the HySime algorithm to calculate the signal correlation matrix, improving the precision of the results. Experiments with both simulated and real data show that: firstly, the improved HySime algorithm is more accurate and stable than the original HySime algorithm; secondly, its results are more consistent under different conditions than those of the classic noise subspace projection (NSP) algorithm; finally, the noise-whitening process improves the algorithm's adaptability to non-white image noise.
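
    A minimal sketch of the noise-whitening step, assuming the noise covariance has already been estimated: transform the pixel vectors so the noise covariance becomes the identity before the signal-subspace estimation. The function name and data layout are assumptions.

    ```python
    import numpy as np

    def whiten(data, noise_cov):
        """Noise whitening prior to HySime-style subspace estimation.
        data: (n_pixels, n_bands); noise_cov: (n_bands, n_bands) estimate."""
        # Inverse square root of the noise covariance via eigendecomposition.
        vals, vecs = np.linalg.eigh(noise_cov)
        inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
        return data @ inv_sqrt.T
    ```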

  1. Television Quiz Show Simulation

    ERIC Educational Resources Information Center

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  2. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  3. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

    The task of inducing grammar structures has received a great deal of attention. The motivations for studying grammar induction differ: to use it as the first stage in building large treebanks, or to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning; this constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines the grammar based on a nonterminal in each step; since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.

  4. Algorithm for Increasing Traffic Capacity of Level-Crossing Using Scheduling Theory and Intelligent Embedded Devices

    NASA Astrophysics Data System (ADS)

    Alps, Ivars; Gorobetz, Mikhail; Levchenkov, Anatoly

    2011-01-01

    In this paper the authors present a heuristic algorithm for increasing level-crossing traffic capacity. A genetic algorithm is proposed to solve this task. To control motion speed and operate the level-crossing barriers, the authors propose creating a control centre and installing intelligent embedded devices on railway vehicles. The algorithm is tested in computer experiments. The results show great promise for improving rail transport schedule fulfilment and increasing level-crossing traffic capacity with the proposed algorithm.

  5. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.

  6. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, following the "one-time pad" idea. A random parameter is introduced to ensure a different keystream for each encryption, which gives the scheme the character of a "one-time pad" and substantially improves the security of the algorithm without a significant increase in its complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids having to negotiate its transport and makes the algorithm easier to apply. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.
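
    A minimal sketch of the "fresh random parameter per encryption" idea, with two stand-ins loudly noted: a SHA-256 counter-mode keystream replaces the paper's chaotic generator, and the nonce is simply prepended rather than hidden inside the ciphered image with information hiding.

    ```python
    import hashlib, os

    def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
        """SHA-256 in counter mode (stand-in for a chaotic keystream)."""
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    def encrypt(image: bytes, key: bytes) -> bytes:
        nonce = os.urandom(16)        # fresh random parameter per encryption
        ks = _keystream(key, nonce, len(image))
        cipher = bytes(a ^ b for a, b in zip(image, ks))
        return nonce + cipher         # the paper hides the nonce instead

    def decrypt(blob: bytes, key: bytes) -> bytes:
        nonce, cipher = blob[:16], blob[16:]
        ks = _keystream(key, nonce, len(cipher))
        return bytes(a ^ b for a, b in zip(cipher, ks))

    assert decrypt(encrypt(b"pixels", b"key"), b"key") == b"pixels"
    ```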

  7. A robust fuzzy local information C-Means clustering algorithm.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2010-05-01

    This paper presents a variation of the fuzzy c-means (FCM) algorithm that provides image clustering. The proposed algorithm incorporates the local spatial information and gray level information in a novel fuzzy way. The new algorithm is called fuzzy local information C-Means (FLICM). FLICM can overcome the disadvantages of the known fuzzy c-means algorithms and at the same time enhances the clustering performance. The major characteristic of FLICM is the use of a fuzzy local (both spatial and gray level) similarity measure, aiming to guarantee noise insensitiveness and image detail preservation. Furthermore, the proposed algorithm is fully free of the empirically adjusted parameters (α, λg, λs, etc.) incorporated into all other fuzzy c-means algorithms proposed in the literature. Experiments performed on synthetic and real-world images show that the FLICM algorithm is effective and efficient, providing robustness to noisy images.

  8. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    NASA Astrophysics Data System (ADS)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
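
    The MRT algorithm itself is more involved, but its input, fluctuations of the Gutenberg-Richter distribution parameter, can be sketched with Aki's maximum-likelihood b-value estimator over a moving window of magnitudes; the window length and completeness magnitude m_c below are assumptions.

    ```python
    import math

    def rolling_b_value(magnitudes, m_c, window=50):
        """Aki's maximum-likelihood Gutenberg-Richter b-value over a moving
        window of VT event magnitudes above the completeness magnitude m_c:
        b = log10(e) / (mean(M) - m_c)."""
        b_values = []
        for i in range(window, len(magnitudes) + 1):
            w = magnitudes[i - window:i]
            mean_excess = sum(w) / window - m_c
            b_values.append(math.log10(math.e) / max(mean_excess, 1e-6))
        return b_values
    ```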

  9. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  10. Physical Validation of GPM Retrieval Algorithms Over Land: An Overview of the Mid-Latitude Continental Convective Clouds Experiment (MC3E)

    NASA Astrophysics Data System (ADS)

    Petersen, W. A.; Jensen, M. P.

    2011-12-01

    The joint NASA GPM - DOE ARM Midlatitude Continental Convective Clouds Experiment (MC3E) was conducted from April 22-June 6, 2011, centered on the DOE-ARM Southern Great Plains Central Facility site in northern Oklahoma. GPM field campaign objectives focused on the collection of airborne and ground-based measurements of warm-season continental precipitation processes to support refinement of GPM retrieval algorithm physics over land, and to improve the fidelity of coupled cloud resolving and land-surface satellite simulator models. DOE ARM objectives were synergistically focused on relating observations of cloud microphysics and the surrounding environment to feedbacks on convective system dynamics, an effort driven by the need to better represent those interactions in numerical modeling frameworks. More specific topics addressed by MC3E include ice processes and ice characteristics as coupled to precipitation at the surface and radiometer signals measured in space, the correlation properties of rainfall and drop size distributions and impacts on dual-frequency radar retrieval algorithms, the transition of cloud water to rain water (e.g., autoconversion processes) and the vertical distribution of cloud water in precipitating clouds, and vertical draft structure statistics in cumulus convection. The MC3E observational strategy relied on NASA ER-2 high-altitude airborne multi-frequency radar (HIWRAP Ka-Ku band) and radiometer (AMPR, CoSMIR; 10-183 GHz) sampling (a GPM "proxy") over an atmospheric column being simultaneously profiled in situ by the University of North Dakota Citation microphysics aircraft, an array of ground-based multi-frequency scanning polarimetric radars (DOE Ka-W, X and C-band; NASA D3R Ka-Ku and NPOL S-bands) and wind-profilers (S/UHF bands), supported by a dense network of over 20 disdrometers and rain gauges, all nested in the coverage of a six-station mesoscale rawinsonde network. As an exploratory effort to examine land-surface emissivity

  11. Physical Validation of GPM Retrieval Algorithms Over Land: An Overview of the Mid-Latitude Continental Convective Clouds Experiment (MC3E)

    NASA Technical Reports Server (NTRS)

    Petersen, Walter A.; Jensen, Michael P.

    2011-01-01

    The joint NASA Global Precipitation Measurement (GPM) -- DOE Atmospheric Radiation Measurement (ARM) Midlatitude Continental Convective Clouds Experiment (MC3E) was conducted from April 22-June 6, 2011, centered on the DOE-ARM Southern Great Plains Central Facility site in northern Oklahoma. GPM field campaign objectives focused on the collection of airborne and ground-based measurements of warm-season continental precipitation processes to support refinement of GPM retrieval algorithm physics over land, and to improve the fidelity of coupled cloud resolving and land-surface satellite simulator models. DOE ARM objectives were synergistically focused on relating observations of cloud microphysics and the surrounding environment to feedbacks on convective system dynamics, an effort driven by the need to better represent those interactions in numerical modeling frameworks. More specific topics addressed by MC3E include ice processes and ice characteristics as coupled to precipitation at the surface and radiometer signals measured in space, the correlation properties of rainfall and drop size distributions and impacts on dual-frequency radar retrieval algorithms, the transition of cloud water to rain water (e.g., autoconversion processes) and the vertical distribution of cloud water in precipitating clouds, and vertical draft structure statistics in cumulus convection. The MC3E observational strategy relied on NASA ER-2 high-altitude airborne multi-frequency radar (HIWRAP Ka-Ku band) and radiometer (AMPR, CoSMIR; 10-183 GHz) sampling (a GPM "proxy") over an atmospheric column being simultaneously profiled in situ by the University of North Dakota Citation microphysics aircraft, an array of ground-based multi-frequency scanning polarimetric radars (DOE Ka-W, X and C-band; NASA D3R Ka-Ku and NPOL S-bands) and wind-profilers (S/UHF bands), supported by a dense network of over 20 disdrometers and rain gauges, all nested in the coverage of a six-station mesoscale rawinsonde

  12. Predictors of Post-operative Mycetoma Recurrence Using Machine-Learning Algorithms: The Mycetoma Research Center Experience

    PubMed Central

    Wadal, Ali; Elhassan, Tusneem Ahmed; Zein, Hajer Ahmed; Abdel-Rahman, Manar Elsheikh; Fahal, Ahmed Hassan

    2016-01-01

    Post-operative recurrence in mycetoma after adequate medical and surgical treatment is common and a serious problem. It has detrimental health, socio-economic and psychological effects on patients and families. It is with this in mind that we set out to determine the predictors of post-operative recurrence in mycetoma. The study included 1013 patients with eumycetoma caused by Madurella mycetomatis who underwent surgical excision at the Mycetoma Research Centre, Khartoum, Sudan in the period 1991–2015. The clinical records of these patients were reviewed and relevant information was collected using a pre-designed data collection sheet. The study showed that 276 patients (27.2%) of the studied population developed post-operative recurrence; 217 were males (78.6%) and 59 were females (21.4%). Their age ranged between 5 and 70 years with a mean of 32 years. The disease duration at presentation ranged between 2 months and 17 years. The majority of the patients, 118 (42.8%), had mycetoma of one year's duration. In this study, students were the most affected, 105 (38%), followed by workers, 70 (25.4%), then farmers, 48 (17.3%). The majority of the patients were from Central Sudan, 207 (75%), and Western Sudan, 53 (19.2%), while 11 patients (4%) were from the Northern part. A past history of surgical intervention performed elsewhere was reported in 196 patients (71.1%). A family history of mycetoma was reported in 50 patients (18.1%). The foot was the most affected site, 245 (88.7%), followed by the hand, seen in 19 (6.8%) patients, and 44 (4.5%) had other sites involved. Most of the patients, 258 (93.5%), had wide local surgical excisions while 18 had major amputation. The model predicted that certain groups have a high risk of recurrence, and these include patients with disease duration greater than 10 years and extra-pedal mycetoma. Patients with disease duration between [5–10] years, with pedal mycetoma, who had previous surgery, with positive family history and underwent wide local

  13. Robust face recognition algorithm for identification of disaster victims

    NASA Astrophysics Data System (ADS)

    Gevaert, Wouter J. R.; de With, Peter H. N.

    2013-02-01

    We present a robust face recognition algorithm for the identification of occluded, injured and mutilated faces with a limited training set per person. In such cases, conventional face recognition methods fall short due to specific aspects of the classification. The proposed algorithm involves recursive Principal Component Analysis for reconstruction of affected facial parts, followed by a feature extractor based on Gabor wavelets and uniform multi-scale Local Binary Patterns. As a classifier, a Radial Basis Neural Network is employed. In terms of robustness to facial abnormalities, tests show that the proposed algorithm outperforms conventional face recognition algorithms such as the Eigenfaces approach, Local Binary Patterns and the Gabor magnitude method. To mimic real-life conditions in which the algorithm would have to operate, specific databases have been constructed by merging and jointly compiling partial existing databases. Experiments on these particular databases show that the proposed algorithm achieves recognition rates beyond 95%.

  14. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. Rhode Island's high school graduating class became…

  15. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  16. What Do Maps Show?

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet, appropriate for grades 4-8, features a teaching poster which shows different types of maps (different views of Salt Lake City, Utah), as well as three reproducible maps and reproducible activity sheets which complement the maps. The poster provides teacher background, including step-by-step lesson plans for four geography…

  17. Show Me the Way

    ERIC Educational Resources Information Center

    Dicks, Matthew J.

    2005-01-01

    Because today's students have grown up steeped in video games and the Internet, most of them expect feedback, and usually gratification, very soon after they expend effort on a task. Teachers can get quick feedback to students by showing them videotapes of their learning performances. The author, a 3rd grade teacher describes how the seemingly…

  18. Chemistry Game Shows

    NASA Astrophysics Data System (ADS)

    Campbell, Susan; Muzyka, Jennifer

    2002-04-01

    We present a technological improvement to the use of game shows to help students review for tests. Our approach uses HTML files interpreted with a browser on a computer attached to an LCD projector. The HTML files can be easily modified for use of the game in a variety of courses.

  19. Talk Show Science.

    ERIC Educational Resources Information Center

    Moore, Mitzi Ruth

    1992-01-01

    Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)

  20. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  1. An enhanced dynamic hash TRIE algorithm for lexicon search

    NASA Astrophysics Data System (ADS)

    Yang, Lai; Xu, Lida; Shi, Zhongzhi

    2012-11-01

    Information retrieval (IR) is essential to enterprise systems as orders, customers and materials keep growing. In this article, an enhanced dynamic hash TRIE (eDH-TRIE) algorithm is proposed that can be used in lexicon search in Chinese, Japanese and Korean (CJK) segmentation and in URL identification. In particular, the eDH-TRIE algorithm is suitable for Unicode retrieval. The Auto-Array algorithm and Hash-Array algorithm are proposed to handle the auxiliary memory allocation; the former changes its size on demand without redundant restructuring, and the latter replaces linked lists with arrays, reducing memory overhead. Comparative experiments show that the Auto-Array algorithm and Hash-Array algorithm have better spatial performance and can be used in a multitude of situations. The eDH-TRIE is evaluated for both speed and storage and compared with the naïve DH-TRIE algorithms. The experiments show that the eDH-TRIE algorithm performs better. These algorithms reduce memory overheads and speed up IR.
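
    A minimal hash-trie sketch (dict-of-dicts) showing the kind of lexicon lookup such a TRIE accelerates, here as CJK-style maximum matching; the eDH-TRIE's Auto-Array and Hash-Array memory layouts are not reproduced.

    ```python
    # Sketch: hash-trie build and longest-match lexicon lookup.
    def build_trie(words):
        root = {}
        for w in words:
            node = root
            for ch in w:
                node = node.setdefault(ch, {})
            node["$"] = True                      # end-of-word marker
        return root

    def longest_match(trie, text, start=0):
        """Longest lexicon word beginning at text[start], or '' if none."""
        node, best = trie, ""
        for i in range(start, len(text)):
            node = node.get(text[i])
            if node is None:
                break
            if "$" in node:
                best = text[start:i + 1]
        return best

    trie = build_trie(["中国", "中国人", "人民"])
    print(longest_match(trie, "中国人民"))        # -> 中国人
    ```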

  2. MREIT experiments with 200 µA injected currents: a feasibility study using two reconstruction algorithms, SMM and harmonic B(Z).

    PubMed

    Arpinar, V E; Hamamura, M J; Degirmenci, E; Muftuler, L T

    2012-07-07

    Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object and the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed using these MRI data. Currently, the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA for low frequencies. However, published MREIT studies have utilized currents 10-400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current and tested the performance of two MREIT reconstruction algorithms. The reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul (1998 Elektrik 6 215-25) with Tikhonov regularization and the harmonic B(Z) proposed by Oh et al (2003 Magn. Reson. Med. 50 875-8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity at low and high current conditions. It should be noted that 200 µA total injected current into a cylindrical phantom generates only 14.7 µA of current in the imaging slice. Similarly, 5 mA total injected current results in 367 µA in the imaging slice. Total

  3. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse; they are stored so that on-line analytical processing queries can be answered efficiently. The first issue for the user to consider is query response time. In this paper we therefore develop algorithms to select a set of views to materialize in the data warehouse so as to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. A genetic algorithm is applied to the materialized view selection problem; however, as the genetic process proceeds, legal solutions become harder and harder to produce, so many solutions are eliminated and the time to produce solutions lengthens. An improved algorithm is therefore presented in this paper, which combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments were conducted to test the behaviour and efficiency of our algorithms. The experiments show that the given methods provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
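
    A minimal sketch of the randomized search described, using simulated annealing over subsets of candidate views; the maintenance-cost and query-time functions are hypothetical placeholders for the paper's cost model.

    ```python
    # Sketch: simulated annealing for the query-cost view-selection problem.
    import math, random

    def anneal_views(candidates, maintenance_cost, query_time, max_query_time,
                     t0=100.0, cooling=0.95, steps=1000):
        def cost(s):
            penalty = 1e9 if query_time(s) > max_query_time else 0.0
            return maintenance_cost(s) + penalty   # constraint as a penalty
        current = {v for v in candidates if random.random() < 0.5}
        best, t = set(current), t0
        for _ in range(steps):
            neighbor = set(current)
            neighbor.symmetric_difference_update({random.choice(candidates)})
            delta = cost(neighbor) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = neighbor                 # accept downhill, or uphill w.p.
            if cost(current) < cost(best):
                best = set(current)
            t *= cooling                           # cool the temperature
        return best
    ```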

  4. Memetic algorithm for community detection in networks.

    PubMed

    Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng

    2011-11-01

    Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method.
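
    A minimal skeleton of the memetic scheme, assuming a user-supplied fitness over community-label vectors (modularity density in Meme-Net); the crossover and the hill-climbing move set are simplified stand-ins.

    ```python
    # Sketch: genetic algorithm + hill-climbing local search (memetic algorithm).
    import random

    def hill_climb(labels, fitness, n_moves=50):
        best, best_f = labels[:], fitness(labels)
        for _ in range(n_moves):
            cand = best[:]
            cand[random.randrange(len(cand))] = random.choice(cand)  # move a node
            f = fitness(cand)
            if f > best_f:
                best, best_f = cand, f
        return best

    def memetic(fitness, n_nodes, pop_size=20, gens=50):
        pop = [[random.randrange(n_nodes) for _ in range(n_nodes)]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]                  # elitist selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(n_nodes)
                children.append(hill_climb(a[:cut] + b[cut:], fitness))
            pop = parents + children
        return max(pop, key=fitness)
    ```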

  5. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre.

  6. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  7. The Great Cometary Show

    NASA Astrophysics Data System (ADS)

    2007-01-01

    With its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such an enhanced polar mass-loss in the case of rapidly rotating stars. (ESO PR Photo 06c/07: RS Ophiuchi in Outburst.) Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest example of a star known as a Wolf-Rayet. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights into the modeling of Wolf-Rayet stars, and also made it possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide. The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that had been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in extension. AMBER has detected a high-velocity jet probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave

  8. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms, though the frequency with which these updates would be needed was uncertain. This analysis therefore develops mathematical models for the conversion of irradiance at the sensor field-of-view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture, and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy using simpler computations.
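
    A minimal sketch of a gain/offset count conversion of the type chosen here; the gain and offset values below are purely illustrative, whereas ERBE derived and updated them from the dynamic sensor model.

    ```python
    # Sketch: linear gain/offset conversion from data counts to irradiance.
    def counts_to_irradiance(counts, gain, offset):
        """Invert the sensor count response: E = gain * (counts - offset)."""
        return gain * (counts - offset)

    print(counts_to_irradiance(2048, gain=0.25, offset=1024.0))  # 256.0 (illustrative units)
    ```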

  9. Stretched View Showing 'Victoria'

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [Figure: Stretched View Showing 'Victoria']

    This pair of images from the panoramic camera on NASA's Mars Exploration Rover Opportunity served as initial confirmation that the two-year-old rover is within sight of 'Victoria Crater,' which it has been approaching for more than a year. Engineers on the rover team were unsure whether Opportunity would make it as far as Victoria, but scientists hoped for the chance to study such a large crater with their roving geologist. Victoria Crater is 800 meters (nearly half a mile) in diameter, about six times wider than 'Endurance Crater,' where Opportunity spent several months in 2004 examining rock layers affected by ancient water.

    When scientists using orbital data calculated that they should be able to detect Victoria's rim in rover images, they scrutinized frames taken in the direction of the crater by the panoramic camera. To positively characterize the subtle horizon profile of the crater and some of the features leading up to it, researchers created a vertically-stretched image (top) from a mosaic of regular frames from the panoramic camera (bottom), taken on Opportunity's 804th Martian day (April 29, 2006).

    The stretched image makes mild nearby dunes look like more threatening peaks, but that is only a result of the exaggerated vertical dimension. This vertical stretch technique was first applied to Viking Lander 2 panoramas by Philip Stooke, of the University of Western Ontario, Canada, to help locate the lander with respect to orbiter images. Vertically stretching the image allows features to be more readily identified by the Mars Exploration Rover science team.

    The bright white dot near the horizon to the right of center (barely visible without labeling or zoom-in) is thought to be a light-toned outcrop on the far wall of the crater, suggesting that the rover can see over the low rim of Victoria. In figure 1, the northeast and southeast rims are labeled

  10. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper which is based on an improved ABC algorithm using external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
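
    A minimal sketch of a modified Hausdorff distance (mean nearest-neighbour distance, Dubuisson-Jain style) of the kind used to screen candidate matches; the two point sets stand in for measured and map-derived gravity track features.

    ```python
    # Sketch: modified Hausdorff distance between two 2-D point sets.
    import numpy as np

    def modified_hausdorff(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
        return max(d.min(axis=1).mean(), d.min(axis=0).mean())

    a = np.array([[0.0, 0.0], [1.0, 1.0]])
    b = np.array([[0.1, 0.0], [1.0, 0.9]])
    print(modified_hausdorff(a, b))    # small value -> good match
    ```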

  11. An Improved Ant Algorithm for Grid Task Scheduling Strategy

    NASA Astrophysics Data System (ADS)

    Wei, Laizhi; Zhang, Xiaobin; Li, Yun; Li, Yujie

    Task scheduling is an important factor that directly influences the performance and efficiency of the system. Grid resources are usually distributed in different geographic locations, belong to different organizations, and their properties vary widely; to schedule tasks efficiently and intelligently, the choice of scheduling strategy is essential. This paper proposes an improved ant algorithm for grid task scheduling, introducing a new type of pheromone and a new node redistribution selection rule. On the one hand, the algorithm can track the performance of resources and tag them. On the other hand, it handles situations in which task scheduling fails, which improves the algorithm's robustness and the probability of successful task allocation, reduces unnecessary system overhead, and shortens the total time to complete tasks. Data obtained from simulation experiments show that this algorithm resolves the scheduling problem better than the traditional ant algorithm.

  12. Algorithm for dynamic Speckle pattern processing

    NASA Astrophysics Data System (ADS)

    Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.

    2016-07-01

    In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded in the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method we obtain an equation with which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes from other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (like THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
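
    A minimal sketch of one plausible activity index in this spirit: mean squared frame-to-frame intensity change normalized by the mean intensity, treating activity as a small perturbation on a static background. This is an assumption for illustration, not the paper's exact formula.

    ```python
    # Sketch: perturbation-style activity index from a stack of speckle frames.
    import numpy as np

    def activity_index(frames):
        """frames: float-convertible array of shape (T, H, W)."""
        frames = np.asarray(frames, dtype=float)
        diffs = np.diff(frames, axis=0)          # frame-to-frame intensity changes
        return (diffs ** 2).mean() / frames.mean() ** 2
    ```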

  13. Inclusive Flavour Tagging Algorithm

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-10-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.

  14. Improvement of depth resolution in depth-resolved wavenumber-scanning interferometry using wavenumber-domain least-squares algorithm: comparison and experiment.

    PubMed

    Bai, Yulei; Jia, Quanjie; Zhang, Yun; Huang, Qiquan; Yang, Qiyu; Ye, Shuangli; He, Zhaoshui; Zhou, Yanzhou; Xie, Shengli

    2016-05-01

    It is important to improve the depth resolution in depth-resolved wavenumber-scanning interferometry (DRWSI) owing to the limited range of wavenumber scanning. In this work, a new nonlinear iterative least-squares algorithm called the wavenumber-domain least-squares algorithm (WLSA) is proposed for evaluating the phase of DRWSI. The simulated and experimental results of the Fourier transform (FT), complex-number least-squares algorithm (CNLSA), eigenvalue-decomposition and least-squares algorithm (EDLSA), and WLSA were compared and analyzed. According to the results, the WLSA is less dependent on the initial values, and the depth resolution is improved by a factor of about six (from δz to δz/6). Thus, the WLSA exhibits a better performance than the FT, CNLSA, and EDLSA.
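
    A minimal sketch of a wavenumber-domain least-squares fit, assuming the single-interface phase model phi(k) = 2·z·k + phi0 over a limited scan range; the WLSA's iteration details are not reproduced.

    ```python
    # Sketch: estimate depth z by least squares on phase vs. wavenumber.
    import numpy as np
    from scipy.optimize import least_squares

    def fit_depth(k, phase, z0=1.0):
        resid = lambda p: 2 * p[0] * k + p[1] - phase      # p = (z, phi0)
        return least_squares(resid, x0=[z0, 0.0]).x[0]

    k = np.linspace(6.0, 6.5, 100)                         # illustrative scan range
    phase = 2 * 12.0 * k + 0.3 + 0.01 * np.random.randn(k.size)
    print(fit_depth(k, phase))                             # ~12.0
    ```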

  15. Comparison of swarm intelligence algorithms in atmospheric compensation for free space optical communication

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui

    2015-03-01

    Conventional adaptive optics can compensate atmospheric turbulence in free-space optical (FSO) communication systems, but under strong scintillation the wavefront measurements from a Shack-Hartmann sensor (SH) become unreliable. Since wavefront-sensorless adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO, and mainly discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration and improve both the convergence rate and the coupling efficiency of the receiver to a large extent.

  16. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels, which play a vital role in reducing proliferative diabetic retinopathy and in preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is best suited for fast processing applications.
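
    A minimal sketch of the enhancement-plus-automatic-threshold idea, assuming OpenCV, a hypothetical grayscale fundus image file, and Otsu's method as the automatic threshold selector; the paper's exact preprocessing chain may differ.

    ```python
    # Sketch: histogram equalization followed by automatic (Otsu) thresholding.
    import cv2

    img = cv2.imread("fundus_gray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    eq = cv2.equalizeHist(img)                                 # contrast enhancement
    _, vessels = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("vessels.png", vessels)
    ```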

  17. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding.

    PubMed

    Saleh, Marwan D; Eswaran, C

    2012-01-01

    Retinal blood vessel detection and analysis play vital roles in early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtration and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.

  18. Multi-heuristic dynamic task allocation using genetic algorithms in a heterogeneous distributed system.

    PubMed

    Page, Andrew J; Keane, Thomas M; Naughton, Thomas J

    2010-07-01

    We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms.

  19. Genetic-based EM algorithm for learning Gaussian mixture models.

    PubMed

    Pernkopf, Franz; Bouchaffra, Djamel

    2005-08-01

    We propose a genetic-based expectation-maximization (GA-EM) algorithm for learning Gaussian mixture models from multivariate data. This algorithm is capable of selecting the number of components of the model using the minimum description length (MDL) criterion. Our approach benefits from the properties of genetic algorithms (GA) and the EM algorithm by combining both into a single procedure. The population-based stochastic search of the GA explores the search space more thoroughly than the EM method. Therefore, our algorithm enables escaping from local optimal solutions, since the algorithm becomes less sensitive to its initialization. The GA-EM algorithm is elitist, which maintains the monotonic convergence property of the EM algorithm. The experiments on simulated and real data show that the GA-EM outperforms the EM method since: 1) it obtains a better MDL score while using exactly the same termination condition for both algorithms; and 2) it identifies the number of components used to generate the underlying data more often than the EM algorithm.
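
    A minimal sketch of the model-selection idea, assuming scikit-learn and using BIC as a stand-in for the paper's MDL criterion; the genetic search over populations of mixtures is omitted.

    ```python
    # Sketch: choose the Gaussian-mixture order by a description-length score.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    X = np.vstack([np.random.randn(200, 2) + m for m in ([0, 0], [5, 5], [0, 5])])
    best = min((GaussianMixture(n_components=k, n_init=3).fit(X)
                for k in range(1, 7)),
               key=lambda g: g.bic(X))          # smallest BIC wins
    print(best.n_components)                    # usually 3 for this synthetic data
    ```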

  20. A new image encryption algorithm based on logistic chaotic map with varying parameter.

    PubMed

    Liu, Lingfeng; Miao, Suoxia

    2016-01-01

    In this paper, we propose a new image encryption algorithm based on a parameter-varied logistic chaotic map and a dynamical algorithm. The parameter-varied logistic map can cure the weaknesses of the logistic map and resist the phase space reconstruction attack. We use the parameter-varied logistic map to shuffle the plain image, and then use a dynamical algorithm to encrypt the image. We carry out several experiments, including histogram analysis, information entropy analysis, sensitivity analysis, key space analysis, correlation analysis and computational complexity analysis, to evaluate its performance. The experiment results show that this algorithm provides high security and is competitive for image encryption.
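
    A minimal sketch of the shuffling stage, assuming one plausible parameter-variation rule (the parameter follows the map state); the paper's actual variation rule and its dynamical encryption stage are not reproduced.

    ```python
    # Sketch: pixel permutation driven by a parameter-varied logistic map.
    import numpy as np

    def logistic_sequence(n, x0=0.3):
        xs, x, r = np.empty(n), x0, 3.99
        for i in range(n):
            x = r * x * (1 - x)
            r = 3.57 + 0.42 * x        # assumed rule: parameter varies with state
            xs[i] = x
        return xs

    def shuffle_image(img, key=0.3):
        flat = img.reshape(-1)
        perm = np.argsort(logistic_sequence(flat.size, x0=key))
        return flat[perm].reshape(img.shape), perm   # keep perm to invert later
    ```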

  1. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  2. Speckle imaging algorithms for planetary imaging

    SciTech Connect

    Johansson, E.

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  3. Diagnostic algorithms in Charcot-Marie-Tooth neuropathies: experiences from a German genetic laboratory on the basis of 1206 index patients.

    PubMed

    Rudnik-Schöneborn, S; Tölle, D; Senderek, J; Eggermann, K; Elbracht, M; Kornak, U; von der Hagen, M; Kirschner, J; Leube, B; Müller-Felber, W; Schara, U; von Au, K; Wieczorek, D; Bußmann, C; Zerres, K

    2016-01-01

    We present clinical features and genetic results of 1206 index patients and 124 affected relatives who were referred for genetic testing of Charcot-Marie-Tooth (CMT) neuropathy at the laboratory in Aachen between 2001 and 2012. Genetic detection rates were 56% in demyelinating CMT (71% of autosomal dominant (AD) CMT1/CMTX), and 17% in axonal CMT (24% of AD CMT2/CMTX). Three genetic defects (PMP22 duplication/deletion, GJB1/Cx32 or MPZ/P0 mutation) were responsible for 89.3% of demyelinating CMT index patients in whom a genetic diagnosis was achieved, and the diagnostic yield of the three main genetic defects in axonal CMT (GJB1/Cx32, MFN2, MPZ/P0 mutations) was 84.2%. De novo mutations were detected in 1.3% of PMP22 duplication, 25% of MPZ/P0, and none in GJB1/Cx32. Motor nerve conduction velocity was uniformly <38 m/s in median or ulnar nerves in PMP22 duplication, >40 m/s in MFN2, and more variable in GJB1/Cx32, MPZ/P0 mutations. Patients with CMT2A showed a broad clinical severity regardless of the type or position of the MFN2 mutation. Out of 75 patients, 8 patients (11%) with PMP22 deletions were categorized as CMT1 or CMT2. Diagnostic algorithms are still useful for cost-efficient mutation detection and for the interpretation of large-scale genetic data made available by next generation sequencing strategies.

  4. Research on video target tracking technology based on improved SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhemin; Guo, Zhijie; Yuang, Ye

    2017-01-01

    A novel target tracking algorithm based on an improved SIFT (Scale Invariant Feature Transform) algorithm is proposed in this paper. In order to improve real-time performance, the processing neighborhood of SIFT has been modified to decrease the computational complexity, and the dimension of the SIFT vector is reduced from 128 to 40. Simulations and experiments show that this improved algorithm achieves low computational complexity together with high tracking accuracy and robustness.

  5. Human performance models and rear-end collision avoidance algorithms.

    PubMed

    Brown, T L; Lee, J D; McGehee, D V

    2001-01-01

    Collision warning systems offer a promising approach to mitigate rear-end collisions, but substantial uncertainty exists regarding the joint performance of the driver and the collision warning algorithms. A simple deterministic model of driver performance was used to examine kinematics-based and perceptual-based rear-end collision avoidance algorithms over a range of collision situations, algorithm parameters, and assumptions regarding driver performance. The results show that the assumptions concerning driver reaction times have important consequences for algorithm performance, with underestimates dramatically undermining the safety benefit of the warning. Additionally, under some circumstances, when drivers rely on the warning algorithms, larger headways can result in more severe collisions. This reflects the nonlinear interaction among the collision situation, the algorithm, and driver response that should not be attributed to the complexities of driver behavior but to the kinematics of the situation. Comparisons made with experimental data demonstrate that a simple human performance model can capture important elements of system performance and complement expensive human-in-the-loop experiments. Actual or potential applications of this research include selection of an appropriate algorithm, more accurate specification of algorithm parameters, and guidance for future experiments.
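
    A minimal sketch of a kinematics-based warning rule of the type compared above: warn when the headway drops below the distance implied by an assumed driver reaction time and braking levels. All parameter values here are illustrative.

    ```python
    # Sketch: kinematic warning distance for a rear-end collision alert.
    def warning_distance(v_follow, v_lead, reaction_time, a_follow, a_lead):
        """Speeds in m/s; decelerations in m/s^2 (positive = braking)."""
        return (v_follow * reaction_time
                + v_follow ** 2 / (2 * a_follow)
                - v_lead ** 2 / (2 * a_lead))

    # 30 m/s follower, stopped lead vehicle, 1.5 s reaction, 6 m/s^2 braking:
    print(warning_distance(30.0, 0.0, 1.5, 6.0, 6.0))   # -> 120.0 m
    ```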

  6. Restart-Based Genetic Algorithm for the Quadratic Assignment Problem

    NASA Astrophysics Data System (ADS)

    Misevicius, Alfonsas

    The power of genetic algorithms (GAs) has been demonstrated for various domains of computer science, including combinatorial optimization. In this paper, we propose a new conceptual modification of the genetic algorithm entitled a "restart-based genetic algorithm" (RGA). An effective implementation of RGA for a well-known combinatorial optimization problem, the quadratic assignment problem (QAP), is discussed. The results obtained from the computational experiments on QAP instances from the publicly available library QAPLIB show excellent performance of RGA. This is especially true for the real-life-like QAP instances.

  7. An improved corner detection algorithm for image sequence

    NASA Astrophysics Data System (ADS)

    Yan, Minqi; Zhang, Bianlian; Guo, Min; Tian, Guangyuan; Liu, Feng; Huo, Zeng

    2014-11-01

    A SUSAN corner detection algorithm for image sequences is proposed in this paper. A correlation matching algorithm provides coarse positioning of the detection area; SUSAN corner detection is then used to obtain the interest points of the target. The SUSAN corner detection has been improved: because points in small regions are often falsely detected as corners, a neighbor-direction filter is applied to reduce the rate of mistakes. Experimental results show that the algorithm enhances anti-noise performance and improves detection accuracy.

  8. Rate control algorithm based on frame complexity estimation for MVC

    NASA Astrophysics Data System (ADS)

    Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang

    2010-07-01

    Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit-rate among views based on correlation analysis. The proposed algorithm consists of four levels for more accurate bit-rate control, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm can efficiently implement bit allocation and rate control according to coding parameters.

  9. A component-labeling algorithm based on contour tracing

    NASA Astrophysics Data System (ADS)

    Qiu, Liudong; Li, Zushu

    2007-12-01

    A new method for finding connected components in binary images is presented in this paper. The main step of this method is to use a contour tracing technique to detect component contours, and to use the contour information to fill in interior areas. All the component points are traced by this algorithm in a single pass and are assigned either a new label or the same label as the contour pixels. Comparative experiment results show that our algorithm is fast and not only labels components but also extracts component contours at the same time, which proves more useful than algorithms that only label components.

  10. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm

    PubMed Central

    Wu, Haizhou; Luo, Qifang

    2016-01-01

    Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are used in the experiments and the results are compared among seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS has better accuracy than most of the compared algorithms. PMID:28105044

  11. An ant colony algorithm on continuous searching space

    NASA Astrophysics Data System (ADS)

    Xie, Jing; Cai, Chao

    2015-12-01

    The ant colony algorithm is heuristic, bionic and parallel. Because of its positive feedback, parallelism and ease of combination with other methods, it is widely adopted for planning on discrete spaces, but it is still not good at planning on continuous spaces. After a basic introduction to the ant colony algorithm, we propose an ant colony algorithm on continuous space. Our method makes use of the following three tricks. We search for the next nodes of the route with a fixed step length to guarantee the continuity of the solution. When storing pheromone, it discretizes the pheromone field, clusters states and sums the pheromone values of these states. When updating pheromone, it lets good solutions, as measured by relative score functions, deposit more pheromone, so that the ant colony algorithm can find a sub-optimal solution in shorter time. The simulated experiment shows that our ant colony algorithm can find a sub-optimal solution in relatively short time.

  12. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  13. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    The Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but properly defining the relevant parameters of its model is a difficult task in applications of PCNN; so far, determining the parameters has required a lot of experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts bacterial foraging optimization to search for the optimal parameters, and eliminates the trouble of manually setting the experimental parameters. Experimental results show that the proposed algorithm can effectively complete document segmentation, and its segmentation results are better than those of the comparison algorithms.

  14. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  15. Designing experiments through compressed sensing.

    SciTech Connect

    Young, Joseph G.; Ridzal, Denis

    2013-06-01

    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  16. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves, but are also capable of simulating weather patterns.
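
    A minimal sketch of the diamond-square algorithm named above, generating a (2^n + 1)-sided heightmap whose corner seeds control the large-scale shape; the Perlin/Simplex moisture and temperature layers are omitted.

    ```python
    # Sketch: diamond-square random terrain generation.
    import numpy as np

    def diamond_square(n, roughness=0.6, seed=0):
        rng = np.random.default_rng(seed)
        size = 2 ** n + 1
        h = np.zeros((size, size))
        h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seed corners
        step, scale = size - 1, 1.0
        while step > 1:
            half = step // 2
            # diamond step: centre of each square = mean of 4 corners + noise
            for y in range(half, size, step):
                for x in range(half, size, step):
                    avg = (h[y - half, x - half] + h[y - half, x + half]
                           + h[y + half, x - half] + h[y + half, x + half]) / 4.0
                    h[y, x] = avg + rng.uniform(-scale, scale)
            # square step: edge midpoints = mean of in-bounds neighbours + noise
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    nbrs = [h[y + dy, x + dx]
                            for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                            if 0 <= y + dy < size and 0 <= x + dx < size]
                    h[y, x] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
            step, scale = half, scale * roughness   # halve step, damp the noise
        return h

    terrain = diamond_square(7)   # 129 x 129 heightmap
    ```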

  17. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.

  18. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step of computer vision and one of its most active research fields. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed based on images of planar or tridimensional targets. Using the algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, Tsai's general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provides a reference for precise measurement of relative position and attitude.

  19. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation.

    PubMed

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA, based on quantum computing, which offers exponential acceleration for heuristic algorithms, uses quantum bits to encode artificial fish and applies a quantum revolving gate, preying behavior, following behavior and variation of the quantum artificial fish to update the fish while searching for the optimal value. Then, we apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) to simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, applying IQAFSA to distributed network problems, the simulation results for a 33-bus radial distribution network system show that IQAFSA yields the minimum power loss compared with BAFSA, GAFSA, and QAFSA.

  20. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    NASA Astrophysics Data System (ADS)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.

  1. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
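
    A minimal sketch of the viscous-friction-like alignment term described, assuming planar agents with position and velocity arrays; the gain and interaction range are illustrative, and delays, sensor noise and the target-tracking terms are omitted.

    ```python
    # Sketch: velocity-alignment ("viscous friction") acceleration for flocking.
    import numpy as np

    def alignment_term(pos, vel, r_align=10.0, c_frict=0.5):
        """pos, vel: (N, 2) arrays; returns (N, 2) alignment accelerations."""
        acc = np.zeros_like(vel)
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = (d < r_align) & (d > 0)          # neighbours within range
            if nbrs.any():
                acc[i] = c_frict * (vel[nbrs].mean(axis=0) - vel[i])
        return acc
    ```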

  2. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.

    PubMed

    Ali, Ahmed F; Tawhid, Mohamed A

    2016-01-01

    The cuckoo search algorithm is a promising metaheuristic population-based method that has been applied to solve many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines cuckoo search with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for a number of iterations; the best obtained solution is then passed to the Nelder-Mead algorithm as an intensification process in order to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of the cuckoo search algorithm with the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems, and compare against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time.
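
    A minimal sketch of the two-phase idea, assuming SciPy: a short cuckoo-search-style global phase with Mantegna-style heavy-tailed (Lévy-flight-like) steps, then Nelder-Mead refinement of the best nest. Step sizes, nest counts and bounds are illustrative.

    ```python
    # Sketch: cuckoo-search exploration followed by Nelder-Mead intensification.
    import numpy as np
    from scipy.optimize import minimize

    def hcsnm(f, dim, bounds=(-5.0, 5.0), n_nests=15, n_iters=100, seed=0):
        rng = np.random.default_rng(seed)
        nests = rng.uniform(bounds[0], bounds[1], size=(n_nests, dim))
        for _ in range(n_iters):
            best = nests[np.argmin([f(x) for x in nests])].copy()
            for i in range(n_nests):
                u, v = rng.standard_normal(dim), rng.standard_normal(dim)
                step = u / np.abs(v) ** (1 / 1.5)   # Mantegna-style heavy tails
                cand = np.clip(nests[i] + 0.01 * step * (nests[i] - best),
                               bounds[0], bounds[1])
                if f(cand) < f(nests[i]):
                    nests[i] = cand                 # greedy replacement
        best = nests[np.argmin([f(x) for x in nests])]
        return minimize(f, best, method="Nelder-Mead")  # intensification phase

    sphere = lambda x: float(np.sum(x ** 2))
    print(hcsnm(sphere, dim=5).x)                   # near the origin
    ```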

  3. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor to make a more careful and thorough search near the nest locations. In order to select a reasonable number of repeat-cycled disturbances, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.

  4. A compilation of jet finding algorithms

    SciTech Connect

    Flaugher, B.; Meier, K.

    1992-12-31

    Technical descriptions of jet finding algorithms currently in use in p-pbar collider experiments (CDF, UA1, UA2), e+e- experiments and Monte-Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E_T and P_T of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five incarnations of this approach have been described.

  5. Research on Chord Searching Algorithm Base on Cache Strategy

    NASA Astrophysics Data System (ADS)

    Jun, Guo; Chen, Chen

    How to improve search efficiency is a core problem in P2P networks. Chord is a successful search algorithm, but its lookup efficiency is reduced because the finger table holds redundant information. This paper proposes a recently-visited table and improves the finger table to carry more useful information in Chord. The simulation experiments show that the approach can effectively improve routing efficiency.

  6. Component Labeling Algorithm For Video Rate Processing

    NASA Astrophysics Data System (ADS)

    Gotoh, Toshiyuki; Ohta, Yoshiyuki; Yoshida, Masumi; Shirai, Yoshio

    1987-10-01

    In this paper, we propose a raster scanning algorithm for component labeling which enables processing under a pipeline architecture. In the raster scanning algorithm, labels are provisionally assigned to each pixel of a component and, at the same time, the connectivities of labels are detected during the first scan. Those labels are classified into groups based on the connectivities. Finally, the provisional labels are updated using the result of the classification, and a unique label is assigned to each pixel of a component. However, in the conventional algorithm the classification process needs a vast number of operations, which prevents pipeline processing. We have developed a preprocessing method that reduces the number of provisional labels, which limits the number of label connectivities. We have also developed a new classification method whose operation count is proportional to the number of label connectivities itself. We have verified this algorithm with computer simulation experiments. The experimental results show that we can process 512 x 512 x 8 bit images at video rate (1/30 s per image) when this algorithm is implemented in hardware.
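
    A minimal sketch of the general two-pass raster-scan scheme the paper accelerates: provisional labels plus recorded connectivities (union-find here), then a relabeling pass. 4-connectivity on a binary NumPy image is assumed; the paper's preprocessing and hardware classification method are not reproduced.

    ```python
    # Sketch: two-pass connected-component labeling with union-find.
    import numpy as np

    def label_components(img):
        labels = np.zeros(img.shape, dtype=int)
        parent = [0]                                    # union-find forest
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]           # path halving
                a = parent[a]
            return a
        for y, x in zip(*np.nonzero(img)):              # pass 1: provisional labels
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if not up and not left:
                parent.append(len(parent))              # brand-new label
                labels[y, x] = len(parent) - 1
            else:
                labels[y, x] = max(up, left)
                if up and left and find(up) != find(left):
                    parent[find(up)] = find(left)       # record connectivity
        for y, x in zip(*np.nonzero(labels)):           # pass 2: final labels
            labels[y, x] = find(labels[y, x])
        return labels
    ```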

  7. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA version of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization; the algorithm describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  8. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
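
    For context, the kind of single-level fixed-point iteration that such multi-level methods accelerate looks like the sketch below (simple power iteration on a row-stochastic matrix P; a baseline, not the multi-level scheme itself):

        import numpy as np

        def stationary_distribution(P, tol=1e-10, max_iter=100000):
            """Fixed-point iteration for pi = pi P on a row-stochastic P."""
            n = P.shape[0]
            pi = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                new = pi @ P                  # one sweep of the fixed-point map
                if np.abs(new - pi).sum() < tol:
                    return new
                pi = new
            return pi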

  9. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  10. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  11. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.

  12. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm.

  13. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to the target template, based on a correlation measure between two images. Because there is no need to segment the image and the computational load is small, image correlation matching is a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms cannot be used in real-time systems because they are too complex; however, target tracking often requires high real-time performance. Based on this consideration, we put forward a fitting algorithm named the paraboloidal fitting algorithm, which is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. In order to study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector fixed to an arc pendulum table; pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The result shows that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
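
    A sketch of SAD matching followed by a separable parabolic sub-pixel fit of the kind the abstract describes; variable names and the exact form of the fit are our simplification:

        import numpy as np

        def sad_match(image, template):
            """Integer-pixel SAD search plus a per-axis parabolic refinement."""
            th, tw = template.shape
            ih, iw = image.shape
            sad = np.empty((ih - th + 1, iw - tw + 1))
            for y in range(sad.shape[0]):
                for x in range(sad.shape[1]):
                    sad[y, x] = np.abs(image[y:y+th, x:x+tw].astype(float)
                                       - template).sum()
            y0, x0 = np.unravel_index(sad.argmin(), sad.shape)

            def parabola_offset(l, c, r):
                # vertex of the parabola through three neighboring SAD values
                denom = l - 2 * c + r
                return 0.0 if denom == 0 else 0.5 * (l - r) / denom

            dx = (parabola_offset(sad[y0, x0-1], sad[y0, x0], sad[y0, x0+1])
                  if 0 < x0 < sad.shape[1] - 1 else 0.0)
            dy = (parabola_offset(sad[y0-1, x0], sad[y0, x0], sad[y0+1, x0])
                  if 0 < y0 < sad.shape[0] - 1 else 0.0)
            return y0 + dy, x0 + dx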

  14. 15. Detail showing lower chord pinconnected to vertical member, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN

  15. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the LSB algorithm is easily detected with high accuracy by χ² and RS steganalysis. Starting from the selection of the embedding locations and a modification of the embedding method, and combining a sub-affine transformation with matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
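
    For contrast, the plain LSB embedding that χ² and RS steganalysis detect so reliably is only a few lines; the paper's sub-affine transformation and matrix coding layers are not reproduced in this sketch:

        import numpy as np

        def lsb_embed(pixels, bits):
            """Hide a bit list in the least significant bits of a cover image."""
            flat = pixels.flatten()               # copy; the cover is not modified
            if len(bits) > flat.size:
                raise ValueError("message too long for cover image")
            flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
            return flat.reshape(pixels.shape)

        def lsb_extract(pixels, n_bits):
            return (pixels.flatten()[:n_bits] & 1).tolist()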

  16. Experiment Databases

    NASA Astrophysics Data System (ADS)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  17. An effective cache algorithm for heterogeneous storage systems.

    PubMed

    Li, Yong; Feng, Dan; Shi, Zhan

    2013-01-01

    Modern storage environments are commonly composed of heterogeneous storage devices. However, traditional cache algorithms exhibit performance degradation in heterogeneous storage systems because they were not designed to work with diverse performance characteristics. In this paper, we present a new cache algorithm called HCM for heterogeneous storage systems. The HCM algorithm partitions the cache among the disks and adopts an effective scheme to balance the work across the disks. Furthermore, it applies benefit-cost analysis to choose the best allocation of cache blocks to improve performance. Conducting simulations with a variety of traces and a wide range of cache sizes, our experiments show that HCM significantly outperforms the existing state-of-the-art storage-aware cache algorithms.

  18. An improved HMM/SVM dynamic hand gesture recognition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Yao, Yuanyuan; Luo, Yuan

    2015-10-01

    In order to improve the recognition rate and stability of dynamic hand gesture recognition, and to address the low accuracy of the classical HMM algorithm in training the B parameter, this paper proposes an improved HMM/SVM dynamic gesture recognition algorithm. In calculating the B parameter of the HMM model, we introduce the SVM algorithm, which has strong classification ability. The state output of the SVM is converted into a probability through the sigmoid function, and this probability is treated as the observation probability of the HMM model. This optimizes the B parameter of the HMM model and improves the recognition rate of the system, while also enhancing the accuracy and real-time performance of human-computer interaction. Experiments show that the algorithm is robust under complex backgrounds and varying illumination. The average recognition rate increased from 86.4% to 97.55%.
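
    A sketch of this coupling step, assuming Platt-style sigmoid parameters (a and b below are illustrative defaults, not the paper's values): the SVM decision values for a state are squashed into probabilities and renormalized so they can serve as that state's observation (B) probabilities:

        import numpy as np

        def svm_scores_to_emission_probs(decision_values, a=-1.0, b=0.0):
            # Platt-style sigmoid: p = 1 / (1 + exp(a*f + b))
            f = np.asarray(decision_values, dtype=float)
            p = 1.0 / (1.0 + np.exp(a * f + b))
            return p / p.sum()                    # each row of B must sum to one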

  19. New algorithms to map asymmetries of 3D surfaces.

    PubMed

    Combès, Benoît; Prima, Sylvain

    2008-01-01

    In this paper, we propose a set of new generic automated processing tools to characterise the local asymmetries of anatomical structures (represented by surfaces) at an individual level, and within/between populations. The building bricks of this toolbox are: (1) a new algorithm for robust, accurate, and fast estimation of the symmetry plane of grossly symmetrical surfaces, and (2) a new algorithm for the fast, dense, nonlinear matching of surfaces. This last algorithm is used both to compute dense individual asymmetry maps on surfaces, and to register these maps to a common template for population studies. We show these two algorithms to be mathematically well-grounded, and provide some validation experiments. Then we propose a pipeline for the statistical evaluation of local asymmetries within and between populations. Finally we present some results on real data.

  20. A layer reduction based community detection algorithm on multiplex networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Liu, Jing

    2017-04-01

    Detecting hidden communities is important for the analysis of complex networks. However, many algorithms have been designed for single layer networks (SLNs) while just a few approaches have been designed for multiplex networks (MNs). In this paper, we propose an algorithm based on layer reduction for detecting communities on MNs, termed LRCD-MNs. First, we improve a layer reduction algorithm termed neighaggre to combine similar layers and keep others separated. Then, we use neighaggre to find the community structure hidden in MNs. Experiments on real-life networks show that neighaggre can obtain higher relative entropy than the competing algorithm. Moreover, we apply LRCD-MNs to some real-life and synthetic multiplex networks; the results demonstrate that, although LRCD-MNs does not have an advantage in terms of modularity, it obtains higher values of surprise, which is used to evaluate the quality of partitions of a network.

  1. A Cultural Algorithm for the Urban Public Transportation

    NASA Astrophysics Data System (ADS)

    Reyes, Laura Cruz; Zezzatti, Carlos Alberto Ochoa Ortíz; Santillán, Claudia Gómez; Hernández, Paula Hernández; Fuerte, Mercedes Villa

    In recent years the population of Leon City, located in the state of Guanajuato in Mexico, has been increasing considerably, causing the inhabitants to waste much of their time on public transportation. As a consequence of the demographic growth and traffic bottlenecks, users deal with the daily problem of optimizing their travel so as to get to their destination on time. To solve this problem of obtaining an optimized route between two points on public transportation, a method based on the cultural algorithm technique is proposed. Cultural algorithms, a relatively recent development, exploit the knowledge generated over a set of time periods for the same population, using a belief space. The proposed method seeks a path that minimizes travel time and the number of transfers. The results of the experiment show that the cultural algorithm technique is applicable to these kinds of multi-objective problems.

  2. A region growing vessel segmentation algorithm based on spectrum information.

    PubMed

    Jiang, Huiyan; He, Baochun; Fang, Di; Ma, Zhiyuan; Yang, Benqiang; Zhang, Libo

    2013-01-01

    We propose a region growing vessel segmentation algorithm based on spectrum information. First, the algorithm applies a Fourier transform to the region of interest containing vascular structures to obtain its spectrum information, from which the primary feature direction is extracted. Then, combining edge information with the primary feature direction, it computes the vascular structure's center points as the seed points for region growing segmentation. Finally, an improved region growing method with a branch-based growth strategy is used to segment the vessels. To prove the effectiveness of our algorithm, we conduct experiments on retinal images and abdominal liver vascular CT images. The results show that the proposed vessel segmentation algorithm can not only extract high quality target vessel regions but also effectively reduce manual intervention.
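
    A generic sketch of the final region-growing step, grown from given seed points with a simple intensity tolerance; the spectrum-based seed detection and the paper's branch-based growth strategy are not reproduced here:

        from collections import deque
        import numpy as np

        def region_grow(image, seeds, tol=10.0):
            """Breadth-first growth: a neighbor joins the region if its
            intensity is within tol of the pixel it was reached from."""
            h, w = image.shape
            grown = np.zeros((h, w), dtype=bool)
            queue = deque(seeds)
            for y, x in seeds:
                grown[y, x] = True
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                            and abs(float(image[ny, nx]) - float(image[y, x])) <= tol):
                        grown[ny, nx] = True
                        queue.append((ny, nx))
            return grown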

  3. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm, with the goal of integrating the advantages of both. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. To test the accuracy of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This shows that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.

  4. Algorithm for thorough background subtraction of high-resolution LC/MS data: application to obtain clean product ion spectra from nonselective collision-induced dissociation experiments.

    PubMed

    Zhang, Haiying; Grubb, Mary; Wu, Wei; Josephs, Jonathan; Humphreys, William G

    2009-04-01

    Nonselective collision-induced dissociation (CID) is a technique for producing fragmentation products for all ions generated in an ion source. It is typical of liquid chromatography/mass spectrometry (LC/MS) analysis of complex samples that matrix-related components may contribute to the resulting product ion spectra and confound the usefulness of this technique for structure interpretation. In this proof-of-principle study, a high-resolution LC/MS-based background subtraction algorithm was used to process the nonselective CID data to obtain clean product ion spectra for metabolites in human plasma samples. With buspirone and clozapine metabolites in human plasma as examples, this approach allowed for not only facile detection of metabolites of interest but also generation of their respective product ion spectra that were clean and free of matrix-related interferences. This was demonstrated with both an MS(E) technique (where E represents collision energy) with a quadrupole time-of-flight (QTOF) instrument and an in-source fragmentation technique with an LTQ Orbitrap instrument. The combined nonselective CID and background subtraction approach should allow for detection and structural interpretation of other types of sample analyses where control samples are obtained.

  5. Research on multirobot pursuit task allocation algorithm based on emotional cooperation factor.

    PubMed

    Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo

    2014-01-01

    Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used for the self-interested robot, which gives a new way to measure a self-interested robot's individual cooperative willingness in the multirobot task allocation problem. An emotional cooperation factor is introduced into the self-interested robot and updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed. Combined with a two-step auction algorithm, it recruits team leaders and team collaborators, sets up pursuit teams, and finally uses certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, comparative experiments have been done against the instantaneous greedy optimal auction algorithm; the results show that the total pursuit time and total team revenue can be optimized by using this algorithm.

  6. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment is also presented, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.

  7. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  8. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions for solving complex problems. One of the latest additions to this group of procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since it is desired to find an optimal weight set for the network during training. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence rates. This study proposes a new technique, CSLM, combining the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.

  9. Dynamically Incremental K-means++ Clustering Algorithm Based on Fuzzy Rough Set Theory

    NASA Astrophysics Data System (ADS)

    Li, Wei; Wang, Rujing; Jia, Xiufang; Jiang, Qing

    Because the classic K-means++ clustering algorithm handles only static data, a dynamically incremental K-means++ clustering algorithm (DK-Means++) based on fuzzy rough set theory is presented in this paper. Firstly, in the DK-Means++ clustering algorithm, the similarity-degree formula is improved with weights computed from the importance degree of attributes, which are reduced on the basis of rough fuzzy set theory. Secondly, new data need only be matched to a granule already clustered by the K-means++ algorithm; only occasionally are new data clustered by the classic K-means++ algorithm over the global data. In this way, re-clustering all data each time the dynamic data set grows is avoided, so clustering efficiency is improved. Our experiments show that the DK-Means++ algorithm can objectively and efficiently deal with the clustering of dynamically incremental data.
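
    A minimal sketch of the incremental matching step, with a plain distance threshold standing in for granule membership (our simplification of the fuzzy rough set machinery):

        import numpy as np

        def assign_incremental(point, centroids, threshold):
            """Return the index of the granule that absorbs the new point,
            or None to signal that a full K-means++ re-clustering is needed."""
            d = np.linalg.norm(centroids - point, axis=1)
            k = int(d.argmin())
            if d[k] <= threshold:
                return k          # absorbed by granule k, no re-clustering
            return None           # fall back to a global K-means++ run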

  10. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS.

    PubMed

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-10-05

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS.

  11. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.

    PubMed

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-12-11

    Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovations are an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, comprising three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment; the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals and improve the accuracy and stability of indoor continuous positioning on the move.

  12. Algorithm to optimize transient hot-wire thermal property measurement.

    PubMed

    Bran-Anleu, Gabriela; Lavine, Adrienne S; Wirz, Richard E; Kavehpour, H Pirouz

    2014-04-01

    The transient hot-wire method has been widely used to measure the thermal conductivity of fluids. The ideal working equation is based on the solution of the transient heat conduction equation for an infinite linear heat source assuming no natural convection or thermal end effects. In practice, the assumptions inherent in the model are only valid for a portion of the measurement time. In this study, an algorithm was developed to automatically select the proper data range from a transient hot-wire experiment. Numerical simulations of the experiment were used in order to validate the algorithm. The experimental results show that the developed algorithm can be used to improve the accuracy of thermal conductivity measurements.

  13. An improved particle swarm optimization algorithm for reliability problems.

    PubMed

    Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven

    2011-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed to solve reliability problems in this paper. The IPSO designs two position updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which not only prevents the IPSO from becoming trapped in local optima, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than four other particle swarm optimization algorithms on reliability problems, and that the solutions obtained by the IPSO are better than the previously reported best-known solutions in the recent literature.
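
    A sketch of one IPSO update under these two strategies, with a mutation step after the position update; all coefficients and the exact switching rule are illustrative, not the paper's values:

        import numpy as np

        def ipso_step(x, v, pbest, gbest, it, max_it,
                      w=0.7, c=1.5, p_big=0.9, p_mut=0.05, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            # Early iterations: follow the particle's own best with large
            # probability; late iterations: follow the most successful particle.
            own_first = it < max_it // 2
            follow_own = rng.random() < (p_big if own_first else 1.0 - p_big)
            guide = pbest if follow_own else gbest
            v = w * v + c * rng.random(x.shape) * (guide - x)
            x = x + v
            # Mutation after the position update, to escape local optima.
            mask = rng.random(x.shape) < p_mut
            return np.where(mask, x + rng.normal(0.0, 1.0, x.shape), x), v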

  14. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  15. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  16. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the sensation closest to that of a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulated driver. One of the main limitations of classical washout filters is that they are tuned by the worst-case scenario method, which is based on trial and error and is affected by the driver's and programmer's experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take the minimisation of human perception error and physical constraints into account; for this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  17. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

  18. Evaluation of Algorithms for Compressing Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and Mapping Science (MSI), for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.

  19. Satellite Animation Shows California Storms

    NASA Video Gallery

    This animation of visible and infrared imagery from NOAA's GOES-West satellite shows a series of moisture-laden storms affecting California from Jan. 6 through Jan. 9, 2017. TRT: 00:36 Credit: NASA...

  20. Satellite Movie Shows Erika Dissipate

    NASA Video Gallery

    This animation of visible and infrared imagery from NOAA's GOES-West satellite from Aug. 27 to 29 shows Tropical Storm Erika move through the Eastern Caribbean Sea and dissipate near eastern Cuba. ...

  1. SAR image registration based on Susan algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Chun-bo; Fu, Shao-hua; Wei, Zhong-yi

    2011-10-01

    Synthetic Aperture Radar (SAR) is an active remote sensing system that can be installed on aircraft, satellites and other carriers, with the advantage of day-and-night, all-weather operation. How to process SAR data and extract information reasonably and efficiently is an important problem; in particular, SAR image geometric correction is the bottleneck impeding the application of SAR. This paper first introduces image registration and the Susan algorithm, then describes the process of SAR image registration based on the Susan algorithm, and finally presents experimental results of SAR image registration. The experiments show that this method is effective and applicable in terms of both computation time and accuracy.

  2. CUDT: a CUDA based decision tree algorithm.

    PubMed

    Lo, Win-Tsung; Chang, Yue-Shan; Sheu, Ruey-Kai; Chiu, Chun-Chieh; Yuan, Shyan-Ming

    2014-01-01

    The decision tree is one of the well-known classification methods in data mining. Many approaches have been proposed focusing on improving the performance of decision trees, but those algorithms were developed to run on traditional distributed systems, where the latency of processing the huge data generated by ubiquitous sensing nodes cannot readily be improved. In order to improve data processing latency in mining huge data, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the performance of CUDT and compare it with the traditional CPU version. The results show that CUDT is 5-55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets.

  3. Fireworks Algorithm with Enhanced Fireworks Interaction.

    PubMed

    Zhang, Bei; Zheng, Yu-Jun; Zhang, Min-Xia; Chen, Sheng-Yong

    2017-01-01

    As a relatively new metaheuristic in swarm intelligence, fireworks algorithm (FWA) has exhibited promising performance on a wide range of optimization problems. This paper aims to improve FWA by enhancing fireworks interaction in three aspects: 1) Developing a new Gaussian mutation operator to make sparks learn from more exemplars; 2) Integrating the regular explosion operator of FWA with the migration operator of biogeography-based optimization (BBO) to increase information sharing; 3) Adopting a new population selection strategy that enables high-quality solutions to have high probabilities of entering the next generation without incurring high computational cost. The combination of the three strategies can significantly enhance fireworks interaction and thus improve solution diversity and suppress premature convergence. Numerical experiments on the CEC 2015 single-objective optimization test problems show the effectiveness of the proposed algorithm. The application to a high-speed train scheduling problem also demonstrates its feasibility in real-world optimization problems.

  4. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the geometric characteristics of a crack, we extract the skeleton to measure the length of a single crack. To calculate the crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experiments show that the errors between this crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
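
    For reference, the plain 3x3 Sobel gradient-magnitude detector that the optimized eight-direction operator builds on looks like the sketch below (the paper's optimization itself is not reproduced):

        import numpy as np

        KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        KY = KX.T

        def sobel_edges(img, thresh=100.0):
            """Binary edge map from the Sobel gradient magnitude."""
            h, w = img.shape
            mag = np.zeros((h, w))
            f = img.astype(float)
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    win = f[y-1:y+2, x-1:x+2]
                    gx = (win * KX).sum()      # horizontal gradient
                    gy = (win * KY).sum()      # vertical gradient
                    mag[y, x] = np.hypot(gx, gy)
            return mag > thresh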

  5. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.

  6. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment of each TAM to produce a new solution for SA, allocate the width of each TAM using a greedy algorithm, and calculate the corresponding testing time. The core assignment is then accepted according to the simulated annealing criterion, and the optimum solution is finally attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02); the results show that our algorithm is superior to the conventional integer linear programming (ILP), simulated annealing (SA) and genetic algorithm (GA) approaches. When the TAM width reaches 48, 56 and 64, the testing time of our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13% respectively. Moreover, the testing time of our algorithm is very close to that of the improved genetic algorithm (IGA), which is currently the state of the art.

  7. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835

  8. Genetic Algorithms for Multiple-Choice Problems

    NASA Astrophysics Data System (ADS)

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.

  9. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases, to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
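
    The simplest form of the underlying idea is a fixed two-bit code per base, which already packs four bases into a byte (2 bits/base versus 8-bit text); DNABIT Compress goes further with variable bit codes for exact and reverse repeat fragments, which this sketch omits:

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASE = {v: k for k, v in CODE.items()}

        def pack(seq):
            """Pack a DNA string into bytes, four bases per byte."""
            out = bytearray()
            for i in range(0, len(seq), 4):
                byte = 0
                chunk = seq[i:i+4]
                for ch in chunk:
                    byte = (byte << 2) | CODE[ch]
                out.append(byte << 2 * (4 - len(chunk)))   # pad the last byte
            return bytes(out), len(seq)

        def unpack(data, n):
            seq = []
            for byte in data:
                for shift in (6, 4, 2, 0):
                    seq.append(BASE[(byte >> shift) & 0b11])
            return "".join(seq[:n])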

  10. Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun

    2012-12-01

    In this paper, based on the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed by self-adaptively selecting the lead orientation vector. A real-valued formulation using Forward-Backward (FB) smoothing and a real-valued inverse covariance matrix is also proposed, which reduces the computational complexity and can distinguish coherent signals. Simulation results show the better performance of the two new algorithms at low signal-to-noise ratio (SNR) in direction of arrival (DOA) estimation compared with the traditional MUSIC algorithm, and lake trials using a MEMS vector hydrophone array show the engineering practicability of the two new algorithms.

  11. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez; Sheri Raborn, CPA; Tom Baker

    2008-03-31

    National Orange Show Photovoltaic Demonstration created a 400KW Photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the Photovoltaic program will result in overall savings for the NOS, help reduce the State's energy demands as relating to electrical power consumption, improve quality of life within the affected grid area as well as increase the energy efficiency of buildings at our venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to service its community.

  12. A consistent-mode indicator for the eigensystem realization algorithm

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Elliott, Kenny B.; Schenk, Axel

    1992-01-01

    A new method is described for assessing the consistency of model parameters identified with the Eigensystem Realization Algorithm (ERA). Identification results show varying consistency in practice due to many sources, including high modal density, nonlinearity, and inadequate excitation. Consistency is considered to be a reliable indicator of accuracy. The new method is the culmination of many years of experience in developing a practical implementation of the Eigensystem Realization Algorithm. The effectiveness of the method is illustrated using data from NASA Langley's Controls-Structures-Interaction Evolutionary Model.

  13. Algorithm and implementation of GPS/VRS network RTK

    NASA Astrophysics Data System (ADS)

    Gao, Chengfa; Yuan, Benyin; Ke, Fuyang; Pan, Shuguo

    2009-06-01

    This paper presents a virtual reference station method and its application. Details of how to generate GPS virtual phase observations are discussed in depth. The developed algorithms have been successfully applied to the independently developed network digital land investigation system. Experiments are carried out to investigate the system's performance, and the results show that the algorithms have good availability and stability. The resulting accuracy of the VRS/RTK positioning was within ±3.3 cm in the horizontal component and ±7.9 cm in the vertical component, which meets the requirements of precise digital land investigation.

  14. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
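
    For reference, a textbook Jacobi sweep for Ax = b, the kind of iteration whose convergence is governed by the spectral radius of its iteration matrix (a generic sketch, not the paper's unified backward matrix):

        import numpy as np

        def jacobi(A, b, tol=1e-10, max_iter=10000):
            """Jacobi iteration; converges when the spectral radius of
            D^-1 (A - D) is below one."""
            D = np.diag(A)                    # diagonal entries of A
            R = A - np.diagflat(D)            # off-diagonal part
            x = np.zeros_like(b, dtype=float)
            for _ in range(max_iter):
                x_new = (b - R @ x) / D       # one Jacobi sweep
                if np.abs(x_new - x).max() < tol:
                    return x_new
                x = x_new
            return x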

  15. Analysis of multigrid algorithms for nonsymmetric and indefinite elliptic problems

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, J.

    1988-10-01

    We prove some new estimates for the convergence of multigrid algorithms applied to nonsymmetric and indefinite elliptic boundary value problems. We provide results for the so-called 'symmetric' multigrid schemes. We show that for the variable V-cycle and the W-cycle schemes, multigrid algorithms with any amount of smoothing on the finest grid converge at a rate that is independent of the number of levels or unknowns, provided that the initial grid is sufficiently fine. We show that the V-cycle algorithm also converges (under appropriate assumptions on the coarsest grid) but at a rate which may deteriorate as the number of levels increases. This deterioration for the V-cycle may occur even in the case of full elliptic regularity. Finally, the results of numerical experiments are given which illustrate the convergence behavior suggested by the theory.

  16. The Application of Baum-Welch Algorithm in Multistep Attack

    PubMed Central

    Zhang, Yanxue; Zhao, Dongmei; Liu, Jinxing

    2014-01-01

    The biggest difficulty in applying the hidden Markov model to multistep attacks is the determination of observations. Research on the determination of observations is still lacking, and it shows a certain degree of subjectivity. In this regard, we integrate attack intentions with the hidden Markov model (HMM) and propose a method for forecasting multistep attacks based on the HMM. Firstly, we train the existing hidden Markov models with the Baum-Welch algorithm. Then we recognize the alerts belonging to attack scenarios with the Forward algorithm. Finally, we forecast the next possible attack sequence with the Viterbi algorithm. The results of simulation experiments show that the trained hidden Markov models perform better than untrained ones in recognition and prediction. PMID:24991642
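
    A minimal log-space Viterbi decoder of the kind used in the final forecasting step; pi, A and B below are assumed toy inputs, not the paper's trained models:

        import numpy as np

        def viterbi(obs, pi, A, B):
            """Most likely state path for a sequence of observation indices."""
            logA, logB = np.log(A), np.log(B)
            delta = np.log(pi) + logB[:, obs[0]]       # best log-prob per state
            back = []
            for o in obs[1:]:
                scores = delta[:, None] + logA         # move from state i to j
                back.append(scores.argmax(axis=0))     # best predecessor of j
                delta = scores.max(axis=0) + logB[:, o]
            path = [int(delta.argmax())]
            for bp in reversed(back):
                path.append(int(bp[path[-1]]))
            return path[::-1]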

  17. Error analysis of the de-crosstalk algorithm for the multianode-PMT-based quadrant tracking sensor.

    PubMed

    Ma, Xiaoyu; Rao, Changhui; Wei, Kai; Guo, Youming; Rao, Xuejun

    2012-12-31

    For the multianode-PMT-based quadrant tracking sensor, one of the sources of tracking error is crosstalk. Crosstalk can be reduced by a de-crosstalk algorithm, so the tracking error of the de-crosstalk algorithm for the multianode-PMT-based quadrant tracking sensor is analyzed in theory and verified by experiment. Both the theoretical analysis and the experimental results show that the spot displacement sensitivity can be improved by the de-crosstalk algorithm, but the spot centroid detection error increases at the same time. The de-crosstalk algorithm therefore cannot improve the tracking accuracy effectively.

  18. Phyllodes tumor showing intraductal growth.

    PubMed

    Makidono, Akari; Tsunoda, Hiroko; Mori, Miki; Yagata, Hiroshi; Onoda, Yui; Kikuchi, Mari; Nozaki, Taiki; Saida, Yukihisa; Nakamura, Seigo; Suzuki, Koyu

    2013-07-01

    Phyllodes tumor of the breast is a rare fibroepithelial lesion and particularly uncommon in adolescent girls. It is thought to arise from the periductal rather than intralobular stroma. Usually, it is seen as a well-defined mass. Phyllodes tumor showing intraductal growth is extremely rare. Here we report a girl who has a phyllodes tumor with intraductal growth.

  19. Higher-order force gradient symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth-order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, yields, when iterated to higher order, algorithms that are far superior to similarly iterated higher-order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size-independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.
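
    For reference, the standard Forest-Ruth scheme that serves as the paper's baseline can be sketched as follows (illustrative Python with a generic acceleration callback; the force-gradient variant is not reproduced):

```python
import numpy as np

def forest_ruth_step(x, v, accel, h):
    """One step of the standard 4th-order Forest-Ruth symplectic integrator.

    theta is the standard Forest-Ruth coefficient; the drift and kick
    coefficients each sum to 1.
    """
    theta = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    for c, d in [(theta / 2, theta),
                 ((1 - theta) / 2, 1 - 2 * theta),
                 ((1 - theta) / 2, theta),
                 (theta / 2, 0.0)]:
        x = x + c * h * v                 # drift
        if d:
            v = v + d * h * accel(x)      # kick
    return x, v

# Kepler problem with an eccentric orbit, as in the paper's test
accel = lambda x: -x / np.linalg.norm(x) ** 3
x, v = np.array([1.0, 0.0]), np.array([0.0, 0.4])
E0 = 0.5 * v @ v - 1 / np.linalg.norm(x)
for _ in range(20000):
    x, v = forest_ruth_step(x, v, accel, 2e-4)
print(abs(0.5 * v @ v - 1 / np.linalg.norm(x) - E0))  # bounded energy error
```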

  20. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong, et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
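
    For reference, the standard textbook form of the Boris step looks like the following sketch (illustrative Python, not code from the letter):

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.

    The rotation preserves |v| exactly, which underlies the
    phase-space-volume conservation discussed in the letter.
    """
    v_minus = v + 0.5 * q_m * dt * E           # first half electric kick
    t = 0.5 * q_m * dt * B                     # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # rotated velocity
    v_new = v_plus + 0.5 * q_m * dt * E        # second half electric kick
    return x + dt * v_new, v_new

# Gyration in a uniform magnetic field (E = 0): |v| stays constant
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.1)
print(np.linalg.norm(v))    # still 1.0 to machine precision
```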

  1. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a random search and optimization method based on the natural selection and hereditary mechanisms of living beings. In recent years, because of its potential for solving complicated problems and its successful applications in industrial engineering, the genetic algorithm has received wide attention from domestic and international scholars. Routing Selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model of Routing Selection communication, and designs and implements a new Routing Selection algorithm based on the genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time and with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.

  2. A statistical-based scheduling algorithm in automated data path synthesis

    NASA Technical Reports Server (NTRS)

    Jeon, Byung Wook; Lursinsap, Chidchanok

    1992-01-01

    In this paper, we propose a new heuristic scheduling algorithm based on statistical analysis of the cumulative frequency distribution of operations among control steps. It tends to escape from local minima and therefore reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. The experimental results show that it yields optimal solutions, even though it is greedy in nature.

  3. Magic Carpet Shows Its Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The upper left image in this display is from the panoramic camera on the Mars Exploration Rover Spirit, showing the 'Magic Carpet' region near the rover at Gusev Crater, Mars, on Sol 7, the seventh martian day of its journey (Jan. 10, 2004). The lower image, also from the panoramic camera, is a monochrome (single filter) image of a rock in the 'Magic Carpet' area. Note that colored portions of the rock correlate with extracted spectra shown in the plot to the side. Four different types of materials are shown: the rock itself, the soil in front of the rock, some brighter soil on top of the rock, and some dust that has collected in small recesses on the rock face ('spots'). Each color on the spectra matches a line on the graph, showing how the panoramic camera's different colored filters are used to broadly assess the varying mineral compositions of martian rocks and soils.

  4. Measurement of the Top Quark Mass using Dilepton Events and a Neutrino Weighting Algorithm with the D0 Experiment at the Tevatron (Run II)

    SciTech Connect

    Meyer, Joerg

    2007-01-01

    This work presents a measurement of the top quark mass by the D0 experiment at Fermilab in the dilepton final states. The comparison of the measured top quark masses in different final states allows an important consistency check of the Standard Model; inconsistent results would be a clear hint of a misinterpretation of the analyzed data set. With the exception of the Higgs boson, all particles predicted by the Standard Model have been found, and the search for the Higgs boson is one of the main focuses of high energy physics. The theory section discusses the close relationship between the physics of the Higgs boson and the top quark.

  5. Development of novel algorithm and real-time monitoring ambulatory system using Bluetooth module for fall detection in the elderly.

    PubMed

    Hwang, J Y; Kang, J M; Jang, Y W; Kim, H

    2004-01-01

    A novel algorithm and a real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor, and a gyroscope, and uses Bluetooth for real-time monitoring. The accelerometer measures kinetic force, while the tilt sensor and gyroscope estimate body posture. We also propose an algorithm for fall detection using signals obtained from the system attached to the chest. To evaluate our system and algorithm, we experimented on three people aged over 26 years. The experiment with four cases (forward fall, backward fall, side fall, and sit-stand) was repeated ten times, and one experiment in daily-life activity was performed for each subject. These experiments showed that our system and algorithm can distinguish falls from daily-life activities, with a fall detection accuracy of 96.7%. Our system is especially suited for long-term, real-time ambulatory monitoring of elderly people in emergency situations.
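
    The abstract does not spell out the detection rule, so the following is only a toy illustration of threshold-based fall detection on chest-worn signals; every threshold and signal here is hypothetical:

```python
import numpy as np

def detect_fall(accel_mag, tilt_deg, impact_g=2.5, lying_deg=60.0):
    """Toy fall detector: an impact spike in acceleration magnitude
    followed by a sustained near-horizontal posture.

    Thresholds are illustrative, not the paper's calibrated values.
    """
    impact = np.flatnonzero(accel_mag > impact_g)
    if impact.size == 0:
        return False
    after = tilt_deg[impact[0]:]              # posture after the spike
    return after.size > 0 and np.median(after) > lying_deg

# Simulated signals: quiet standing, an impact, then lying still
accel = np.concatenate([np.full(50, 1.0), [3.2], np.full(50, 1.0)])
tilt = np.concatenate([np.full(51, 5.0), np.full(50, 80.0)])
print(detect_fall(accel, tilt))   # True
```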

  6. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  7. ARPANET Routing Algorithm Improvements

    DTIC Science & Technology

    1978-10-01

    McQuillan, J. M.; Rosen, E. C. Report 3940. ... this problem may persist for a very long time, causing extremely bad performance throughout the whole network (for instance, if w' reports that one of ...). ... the algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion. These examples show ...

  8. SIMAS ADM XBT Algorithm

    DTIC Science & Technology

    2016-06-07

    ... XBT's sound speed values instead of temperature values. Studies show that the sound speed at the surface in a specific location varies less than ... Data can be entered at the terminal in metric or English temperatures or sound speeds; the algorithm automatically determines which form each data point was ... Leroy's equation is used to derive sound speed from temperature or temperature from sound speed. The previous, current, and next months ...

  9. "Medicine show." Alice in Doctorland.

    PubMed

    1987-01-01

    This is an excerpt from the script of a 1939 play provided to the Institute of Social Medicine and Community Health by the Library of Congress Federal Theater Project Collection at George Mason University Library, Fairfax, Virginia, pages 2-1-8 through 2-1-14. The Federal Theatre Project (FTP) was part of the New Deal program for the arts, 1935-1939. Funded by the Works Progress Administration (WPA), its goal was to employ theater professionals from the relief rolls. A number of FTP plays deal with aspects of medicine and public health. Pageants, puppet shows and documentary plays celebrated progress in medical science while examining social controversies in medical services and the public health movement. "Medicine Show" sharply contrasts technological wonders with social backwardness. The play was rehearsed by the FTP but never opened because funding ended. A revised version ran on Broadway in 1940. The preceding comments are adapted from an excellent, well-illustrated review of five of these plays by Barbara Melosh: "The New Deal's Federal Theatre Project," Medical Heritage, Vol. 2, No. 1 (Jan/Feb 1986), pp. 36-47.

  10. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  11. APL simulation of Grover's algorithm

    NASA Astrophysics Data System (ADS)

    Lipovaca, Samir

    2012-02-01

    Grover's algorithm is a fast quantum search algorithm. Classically, to solve the search problem for a search space of size N, we need approximately N operations; Grover's algorithm offers a quadratic speedup. Since present quantum computers are not robust enough for code writing and execution, we simulate Grover's algorithm using the APL programming language, which is especially suited for this task. For example, to compute the Walsh-Hadamard transformation matrix for N quantum states via a tensor product of N Hadamard matrices, we need to iterate only one line of code N-1 times. An initial study indicates that the quantum mechanical amplitude of the solution is almost independent of the search space size and rapidly reaches values of 0.999, with slight variations at higher decimal places.
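
    As a minimal illustration of the tensor-product construction described above (in Python rather than APL, with illustrative names):

```python
import numpy as np

def hadamard(n_qubits):
    """Walsh-Hadamard transform built by repeated Kronecker products with H,
    mirroring the one-line APL iteration described in the abstract."""
    H = np.array([[1.0, 1.0],
                  [1.0, -1.0]]) / np.sqrt(2.0)
    W = np.array([[1.0]])
    for _ in range(n_qubits):    # one Kronecker product per qubit
        W = np.kron(W, H)
    return W

# Uniform superposition over 8 basis states, Grover's starting point
W = hadamard(3)
psi = W @ np.eye(8)[0]           # apply to |000>
print(psi)                       # all amplitudes equal to 1/sqrt(8)
```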

  12. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linearized Bregman algorithm, a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linearized Bregman algorithm involves only vector and matrix multiplications and a thresholding operation, and is therefore simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation, as well as with other CS reconstruction algorithms such as OMP and TwIST. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
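
    The per-iteration simplicity mentioned above is visible in a plain CPU sketch of linearized Bregman iteration (illustrative Python; the paper's CUDA implementation is not reproduced):

```python
import numpy as np

def linearized_bregman(A, b, mu=5.0, n_iter=5000):
    """Linearized Bregman iteration for min ||u||_1 s.t. Au = b.

    Each iteration needs only matrix-vector products and soft-thresholding,
    which is what makes the algorithm attractive for a GPU port.
    """
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size from spectral norm
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ u)                # gradient-like update
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink
    return u

# Recover a 5-sparse vector from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 42, 60, 88]] = [1.0, -2.0, 1.5, -1.0, 2.0]
b = A @ x_true
x_hat = linearized_bregman(A, b)
print(np.linalg.norm(x_hat - x_true))         # approximate recovery
```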

  13. "Show me" bioethics and politics.

    PubMed

    Christopher, Myra J

    2007-10-01

    Missouri, the "Show Me State," has become the epicenter of several important national public policy debates, including abortion rights, the right to choose and refuse medical treatment, and, most recently, early stem cell research. In this environment, the Center for Practical Bioethics (formerly, Midwest Bioethics Center) emerged and grew. The Center's role in these "cultural wars" is not to advocate for a particular position but to provide well researched and objective information, perspective, and advocacy for the ethical justification of policy positions; and to serve as a neutral convener and provider of a public forum for discussion. In this article, the Center's work on early stem cell research is a case study through which to argue that not only the Center, but also the field of bioethics has a critical role in the politics of public health policy.

  14. Phoenix Scoop Inverted Showing Rasp

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image taken by the Surface Stereo Imager on Sol 49, or the 49th Martian day of the mission (July 14, 2008), shows the silver colored rasp protruding from NASA's Phoenix Mars Lander's Robotic Arm scoop. The scoop is inverted and the rasp is pointing up.

    Shown with its forks pointing toward the ground is the thermal and electrical conductivity probe, at the lower right. The Robotic Arm Camera is pointed toward the ground.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  15. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  16. A combined reconstruction algorithm for computerized ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Wen, D. B.; Ou, J. K.; Yuan, Y. B.

    Ionospheric electron density profiles inverted by tomographic reconstruction of GPS-derived total electron content (TEC) measurements have the potential to become a tool to quantify ionospheric variability and investigate ionospheric dynamics. The problem of reconstructing ionospheric electron density from GPS receiver-to-satellite TEC measurements is formulated as an ill-posed discrete linear inverse problem. A combined reconstruction algorithm for computerized ionospheric tomography (CIT) is proposed in this paper. In this algorithm, Tikhonov regularization theory (TRT) is exploited to solve the ill-posed problem, and its estimate from GPS observation data is input as the initial guess of the simultaneous iterative reconstruction algorithm (SIRT). The combined algorithm offers a more reasonable way to choose the initial guess of SIRT, and the SIRT iteration improves the quality of the final reconstructed image. Numerical experiments on actual GPS observation data validate the reliability of the method; the reconstructed results show that the new algorithm works reasonably and effectively with CIT, and the overall reconstruction error is reduced significantly compared with SIRT only or TRT only.
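
    A compact sketch of the combined idea, with a Tikhonov solution seeding a simultaneous iterative loop, might look as follows (illustrative Python; the matrix is a toy stand-in for the real projection geometry):

```python
import numpy as np

def sirt(A, b, x0, n_iter=500):
    """Simultaneous iterative reconstruction with row/column normalization."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # row-sum weights
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # column-sum weights
    x = x0.copy()
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))      # simultaneous update step
    return x

# Toy stand-in for the projection geometry: random nonnegative matrix
rng = np.random.default_rng(1)
A = rng.random((30, 20))
x_true = rng.random(20)
b = A @ x_true

# Tikhonov-regularized initial guess: (A^T A + lam*I)^{-1} A^T b
lam = 1.0
x0 = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)
print(np.linalg.norm(sirt(A, b, x0) - x_true))  # error shrinks toward zero
```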

  17. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when IRFPAs are used in thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels: the former are caused by material or manufacturing defects and their positions are fixed, while the latter are caused by temperature drift and their positions change over time. The traditional radiometric-calibration-based bad pixel detection and compensation algorithm is only valid for fixed bad pixels; scene-based bad pixel correction is the effective way to eliminate both kinds. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In that algorithm, bad pixels are regarded as image noise and replaced by the filtered value; however, missed corrections and false corrections often occur when the AMF handles complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are first detected by the PCNN, then image sequences are used periodically to confirm the real bad pixels and exclude false ones, and finally bad pixels are replaced by the filtered result. Experiments on real infrared images obtained from a camera show the effectiveness of the proposed algorithm.

  18. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    NASA Astrophysics Data System (ADS)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly perform in a centralized manner which requires maintaining the overall and up-to-date information of the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA), operated in the network central server and within individual sub-networks respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between GVNMA and LVNMA for all examined networks, while performing almost as well as the centralized solutions.

  19. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It continuously measures over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB model (L-Band Emission of the Biosphere, the basis for the SMOS Level 2 Land Processor) over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year: while the vine cycle extends mainly from April to October, during the rest of the year the area remains under bare soil conditions, adequate for calibrating the soil model. The measurement protocol currently running has proven robust during the whole operation time and will be extended as long as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret eventual anomalies that may obscure hidden sensor biases. In addition, SM and TAU that are currently

  20. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    ... Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  1. Block clustering based on difference of convex functions (DC) programming and DC algorithms.

    PubMed

    Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai

    2013-10-01

    We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.

  2. An infrared salient object stereo matching algorithm based on epipolar rectification

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Wu, Lei; Han, Jing; Bai, Lian-fa

    2016-02-01

    Due to the higher noise and fewer details in infrared images, general matching algorithms are prone to producing unsatisfactory results. Combining the idea of salient objects, we propose a novel infrared stereo matching algorithm which applies to unconstrained stereo rigs. Firstly, we present an epipolar rectification method introducing particle swarm optimization and K-nearest neighbors to deal with the epipolar constraint. Then we make use of transition regions to extract salient objects in the rectified infrared image pairs. Finally, the disparity map is generated by matching salient regions. Experiments show that our algorithm handles infrared stereo matching for unconstrained stereo rigs with better accuracy and higher speed.

  3. Research on Loran-C Sky Wave Delay Estimation Using Eigen-decomposition Algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, W.; Hu, Y. H.; Liang, Q.

    2009-04-01

    A novel signal processing technique using the eigenvector algorithm for estimating sky-wave delays in a Loran-C receiver is presented in this paper. This provides the basis for designing a Loran-C receiver capable of adaptively adjusting its sampling point to the optimal value. The estimation accuracy of the algorithm for the sky-wave delay is studied and compared with the IFFT technique. Simulation results show that this algorithm clearly provides better resolution and sharper peaks than the IFFT. Finally, experimental results using off-air data confirm these conclusions.

  4. Swarm intelligence for atmospheric compensation in free space optical communication-Modified shuffled frog leaping algorithm

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Cao, Jingtai; Zhao, Xiaohui; Liu, Wei

    2015-03-01

    A conventional adaptive optics (AO) system is widely used to compensate atmospheric turbulence in free space optical (FSO) communication systems, but wavefront measurements based on the phase-conjugation principle are not reliable under strong scintillation conditions. In this study we propose a novel swarm intelligence optimization algorithm, called the modified shuffled frog leaping algorithm (MSFL), to compensate the wavefront aberration. Simulation and experimental results show that the MSFL algorithm performs well in atmospheric compensation: it can increase the coupling efficiency at the receiver terminal and significantly improve the performance of FSO communication systems.

  5. A fast and accurate algorithm for ℓ1 minimization problems in compressive sampling

    NASA Astrophysics Data System (ADS)

    Chen, Feishe; Shen, Lixin; Suter, Bruce W.; Xu, Yuesheng

    2015-12-01

    An accurate and efficient algorithm for solving the constrained ℓ1-norm minimization problem is highly needed and is crucial for the success of sparse signal recovery in compressive sampling. We tackle the constrained ℓ1-norm minimization problem by reformulating it via an indicator function which describes the constraints. The resulting model is solved efficiently and accurately by using an elegant proximity operator-based algorithm. Numerical experiments show that the proposed algorithm performs well for sparse signals with magnitudes over a high dynamic range. Furthermore, it performs significantly better than the well-known algorithms NESTA (a shorthand for Nesterov's algorithm) and DADM (dual alternating direction method) in terms of the quality of restored signals and the computational complexity measured in CPU time consumed.

  6. An infrared maritime target detection algorithm applicable to heavy sea fog

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Ji, Yuanyuan; Xu, Wenhai

    2015-07-01

    Infrared maritime images taken in heavy sea fog (HSF) are usually nonuniform in brightness distribution, and targets in different regions have significant differences in local contrast, which causes great difficulty for normal target detection algorithms in removing background clutter and extracting targets. To address this problem, this paper proposes a new target detection algorithm based on image region division and wavelet inter-subband correlation. The algorithm first divides the original image into different regions by the adaptive thresholding method OTSU. Then, wavelet threshold denoising is adopted to suppress noise in the subbands. Finally, the real target is extracted according to its inter-subband correlation and local singularity in the original image. Experimental results show that this algorithm can overcome brightness nonuniformity and background clutter to extract all targets accurately. Moreover, the target's area is well retained. The proposed algorithm therefore has high practical value in maritime target search based on infrared imaging systems.

  7. Greedy heuristic algorithm for solving series of EEE components classification problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the preciseness of the result decreases insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  8. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. Firstly, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for real AO images) from our algorithm are increased by 36.92% and 27.44% respectively, the computation time is decreased by 7.2% and 3.4% respectively, and the estimation accuracy is significantly improved.

  9. A Community Detection Algorithm Based on Topology Potential and Spectral Clustering

    PubMed Central

    Wang, Zhixiao; Chen, Zhaotong; Zhao, Ya; Chen, Shaoda

    2014-01-01

    Community detection is of great value for complex networks in understanding their inherent law and predicting their behavior. Spectral clustering algorithms have been successfully applied in community detection. This kind of method has two inadequacies: the input matrices used cannot provide sufficient structural information for community detection, and the proper community number cannot necessarily be derived from the ladder distribution of eigenvector elements. In order to solve these problems, this paper puts forward a novel community detection algorithm based on topology potential and spectral clustering. The new algorithm constructs the normalized Laplacian matrix with the nodes' topology potential, which contains rich structural information about the network. In addition, the new algorithm can automatically obtain the optimal community number from the local maximum potential nodes. Experimental results showed that the new algorithm gives excellent performance on artificial and real-world networks and outperforms other community detection methods. PMID:25147846

  10. Multi-pattern string matching algorithms comparison for intrusion detection system

    NASA Astrophysics Data System (ADS)

    Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.

    2014-12-01

    Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The intrusion detection system (IDS) has become an important part of any modern network to protect it from attacks. The IDS relies on string matching algorithms to identify network attacks, but these algorithms consume a considerable amount of IDS processing time, thereby slowing down the IDS. A new algorithm that can overcome this weakness needs to be developed; improving the multi-pattern matching algorithm ensures that an IDS can work properly and that the limitations can be overcome. In this paper, we compare our three multi-pattern matching algorithms, MP-KR, MPH-QS, and MPH-BMH, with their corresponding original algorithms KR, QS, and BMH respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, with MP-KR the slowest. MPH-QS detects a large number of signature patterns in a short time compared to the other two algorithms. This finding shows that the multi-pattern matching algorithms are more efficient in high-speed networks.
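
    The MPH-BMH variant builds on Boyer-Moore-Horspool; for orientation, here is a minimal single-pattern Horspool in Python (a textbook sketch, not the authors' multi-pattern code):

```python
def horspool(text, pattern):
    """Boyer-Moore-Horspool search: the single-pattern building block that
    multi-pattern variants extend to many signatures at once."""
    m, n = len(pattern), len(text)
    # Bad-character table: shift by distance from last occurrence to pattern end
    shift = {c: m for c in set(text)}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        i += shift.get(text[i + m - 1], m)   # skip by the bad-character rule
    return hits

print(horspool("abacabadabacaba", "aba"))    # [0, 4, 8, 12]
```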

  11. A hybrid algorithm with GA and DAEM

    NASA Astrophysics Data System (ADS)

    Wan, HongJie; Deng, HaoJiang; Wang, XueWei

    2013-03-01

    Although the expectation-maximization (EM) algorithm has been widely used for finding maximum likelihood estimates of parameters in probabilistic models, it suffers from trapping in local maxima. To overcome this problem, the deterministic annealing EM (DAEM) algorithm was proposed and achieved better performance than the EM algorithm, but it is still not very effective at avoiding local maxima. In this paper, a solution is proposed by integrating GA and DAEM into one procedure to further improve the solution quality. The population-based search of the genetic algorithm produces different solutions and thus increases the search space of DAEM; therefore, the proposed algorithm reaches better solutions than DAEM alone. The algorithm retains the properties of DAEM and obtains better solutions through genetic operations. Experimental results on Gaussian mixture model parameter estimation demonstrate that the proposed algorithm achieves better performance.

  12. Dynamic Shortest Path Algorithms for Hypergraphs

    DTIC Science & Technology

    2012-01-01

    ... We analyze the time complexity of the proposed algorithms and perform simulation experiments on both random geometric hypergraphs and a real data set of a social network (the Enron email data set); the latter illustrates the application of the proposed algorithms in social networks for identifying ... Using the random geometric hypergraph model and the Enron email data set, we study the average performance of these two algorithms ...

  13. A Novel Histogram Region Merging Based Multithreshold Segmentation Algorithm for MR Brain Images

    PubMed Central

    Shen, Xuanjing; Feng, Yuncong

    2017-01-01

    Multithreshold segmentation algorithms are time-consuming, and the time complexity increases exponentially with the number of thresholds. In order to reduce the time complexity, a novel multithreshold segmentation algorithm is proposed in this paper. First, all gray levels are used as thresholds, so the histogram of the original image is divided into 256 small regions, each corresponding to one gray level. Then, two adjacent regions are merged in each iteration by a newly designed scheme, and one threshold is removed each time. To improve the accuracy of the merge operation, variance and probability are used as energy. Regardless of the number of thresholds, the time complexity of the algorithm is stable at O(L). Finally, experiments are conducted on many MR brain images to verify the performance of the proposed algorithm. Experimental results show that our method reduces running time effectively and obtains segmentation results with high accuracy.

  14. An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.

    PubMed

    Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan

    2015-11-01

    The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics, which severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic regions (the fluid flow). Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without loss of image quality. Therefore, the temporal resolution can be substantially increased, and fluid flow experiments with faster dynamics can be performed.

  15. Mimas Showing False Colors #1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

    The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in

  16. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online courses, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
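
    The TF-IDF scoring at the core of the pipeline is straightforward; a minimal sketch follows (illustrative only, without the VSM-based reweighting step):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Plain TF-IDF scores per tokenized document; a pipeline like AECKP
    would then reweight these before ranking knowledge points."""
    n = len(docs)
    # Document frequency: number of documents containing each term
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

docs = [["pointer", "array", "pointer"],
        ["array", "loop"],
        ["loop", "function", "pointer"]]
for s in tf_idf(docs):
    print(sorted(s.items(), key=lambda kv: -kv[1])[:2])   # top candidates
```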

  17. A novel spatial clustering algorithm based on Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Yang, Xiankun; Cui, Weihong

    2008-12-01

    Exploratory data analysis is increasingly necessary as ever larger spatial data sets are managed in electronic media. Spatial clustering is one of the most important spatial data mining techniques, and many spatial clustering algorithms have been proposed so far. In this paper we propose a robust spatial clustering algorithm named SCABDT (Spatial Clustering Algorithm Based on Delaunay Triangulation). SCABDT demonstrates important advantages over previous work. First, it discovers clusters of arbitrary shape. Second, it requires no prior knowledge of the distribution. Third, like DBSCAN, it does not require much CPU processing time, as experiments show. Finally, it handles outliers efficiently.

  18. Measuring Constraint-Set Utility for Partitional Clustering Algorithms

    NASA Technical Reports Server (NTRS)

    Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato

    2006-01-01

    Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.

  19. A self-tuning phase-shifting algorithm for interferometry.

    PubMed

    Estrada, Julio C; Servin, Manuel; Quiroga, Juan A

    2010-02-01

    In phase stepping interferometry (PSI), an interferogram sequence with a known, constant phase shift between the interferograms is required. Here we take the case where this constant phase shift is unknown and the only assumption is that the interferograms do have a temporal carrier. To recover the modulating phase from the interferograms, we propose a self-tuning phase-shifting algorithm. Our algorithm estimates the temporal frequency first, and this knowledge is then used to estimate the modulating phase of interest. Several iterative schemes have been published before, but our approach has the unique advantage of being very fast: our new temporal carrier and phase estimator obtains a very good approximation of the temporal carrier in a single iteration. Numerical experiments are given to show the performance of this simple yet powerful self-tuning phase-shifting algorithm.

  20. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  1. Demonstration of quantum permutation algorithm with a single photon ququart.

    PubMed

    Wang, Feiran; Wang, Yunlong; Liu, Ruifeng; Chen, Dongxu; Zhang, Pei; Gao, Hong; Li, Fuli

    2015-06-05

    We report an experiment demonstrating a quantum permutation-determining algorithm with a linear optical system. By employing the photon's polarization and spatial mode, we realize quantum ququart states and all the essential permutation transformations. The quantum permutation-determining algorithm displays the speedup of quantum computation by determining the parity of the permutation in only one evaluation, compared with two for the classical algorithm. This experiment is accomplished at the single-photon level, and the method exhibits universality in high-dimensional quantum computation.

  2. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that our LASS algorithm not only inherits the advantage of the original dissimilarity-increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity-increments clustering method on a massive computer-user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can extract more knowledge from it.

  3. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that our LASS algorithm not only inherits the advantage of the original dissimilarity-increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity-increments clustering method on a massive computer-user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can extract more knowledge from it. PMID:26221133

  4. What is a Systolic Algorithm?

    NASA Astrophysics Data System (ADS)

    Rao, Sailesh K.; Kollath, T.

    1986-07-01

    In this paper, we show that every systolic array executes a regular iterative algorithm with a strongly separating hyperplane and, conversely, that every such algorithm can be implemented on a systolic array. This characterization provides a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts introduced in the sixties by Hennie, Waite and Karp, Miller and Winograd, to the present-day concern of systolic arrays.

  5. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user and classify the context into public and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile users with the algorithm: we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about mobile users. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  6. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user and classify the context into public and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile users with the algorithm: we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about mobile users. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity.

  7. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternating between these two distributions, which can also potentially balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  8. Quantitative validation of a new coregistration algorithm

    SciTech Connect

    Pickar, R.D.; Esser, P.D.; Pozniakoff, T.A.; Van Heertum, R.L.; Stoddart, H.A. Jr.

    1995-08-01

    A new coregistration software package, Neuro900 Image Coregistration software, has been developed specifically for nuclear medicine. With this algorithm, the correlation coefficient is maximized between volumes generated from sets of transaxial slices. No localization markers or segmented surfaces are needed. The coregistration program was evaluated for translational and rotational registration accuracy. A Tc-99m HM-PAO split-dose study (0.53 mCi low dose, L, and 1.01 mCi high dose, H) was simulated with a Hoffman brain phantom with five fiducial markers. Translation error was determined by a shift in image centroid, and rotation error was determined by a simplified two-axis approach. Changes in registration accuracy were measured with respect to: (1) slice spacing, using the four different combinations LL, LH, HL, and HH; (2) translational and rotational misalignment before coregistration; and (3) changes in the step size of the iterative parameters. In all cases the algorithm converged with only small differences in translation offset and rotation. At 6 mm slice spacing, translational errors ranged from 0.9 to 2.8 mm (system resolution at 100 mm: 6.8 mm). The converged parameters showed little sensitivity to count density. In addition, the correlation coefficient increased with decreasing iterative step size, as expected. From these experiments, the authors found that this algorithm, based on maximization of the correlation coefficient between studies, is an accurate way to coregister SPECT brain images.

  9. An enhanced algorithm to estimate BDS satellite's differential code biases

    NASA Astrophysics Data System (ADS)

    Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei

    2016-02-01

    This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of the ionospheric observables, and a misclosure constraint for different types of satellite DCBs is introduced. This algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to precisely estimate BDS satellite DCBs: the mean day-to-day scattering is about 0.19 ns and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm from the Institute of Geodesy and Geophysics, China (IGGDCB) was also used to process the same dataset. Relative to the IGGDCB results, the DCB difference between the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites. In addition, we find that the day-to-day scattering of the BDS IGSO satellites is noticeably lower than that of the GEO and MEO satellites, and that a significant bias exists in the daily DCB values of the GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimating the satellite DCBs of multiple GNSS systems.

  10. Advanced spectral signature discrimination algorithm

    NASA Astrophysics Data System (ADS)

    Chakravarty, Sumit; Cao, Wenjie; Samat, Alim

    2013-05-01

    This paper presents a novel approach to the task of hyperspectral signature analysis. Hyperspectral signature analysis has been studied extensively in the literature, and many different algorithms have been developed to discriminate between hyperspectral signatures. Binary coding approaches like SPAM and SFBC use basic statistical thresholding operations to binarize a signature, and the binarized signatures are then compared using Hamming distance. This framework has been extended to techniques like SDFC, wherein a set of primitive structures is used to characterize local variations in a signature together with overall statistical measures like the mean. Such structures, however, harness only local variations and do not exploit any covariation of spectrally distinct parts of the signature. The approach of this research is to harvest such information using a technique similar to circular convolution. We consider the signature as cyclic by joining its two ends, and then create two copies of the spectral signature. These three signatures can be placed next to each other like the rotating discs of a combination lock. We then find local structures at different circular shifts between the three cyclic spectral signatures. Texture features, as in SDFC, can be used to study the local structural variation at each circular shift. Different measures can then be created by building histograms from the shifts and applying different information-extraction techniques to the histograms. Depending on the technique used, different variants of the proposed algorithm are obtained. Experiments using the proposed technique show the viability of the proposed methods and their performance compared to current binary signature coding techniques.
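    A minimal sketch of the circular-shift idea described above, assuming simple absolute-difference texture features and a hypothetical histogram comparison; the paper's actual SDFC features are not reproduced here.

```python
import numpy as np

def circular_shift_histograms(signature, n_bins=16):
    """Compare a cyclic spectral signature against shifted copies of
    itself and histogram the local differences at each shift."""
    sig = np.asarray(signature, dtype=float)
    histograms = []
    for shift in range(1, sig.size):
        # np.roll implements the circular (combination-lock) shift.
        diff = np.abs(sig - np.roll(sig, shift))
        hist, _ = np.histogram(diff, bins=n_bins, range=(0.0, 1.0))
        histograms.append(hist)
    return np.array(histograms)

# Hypothetical usage: compare two signatures by their shift histograms.
a = circular_shift_histograms(np.random.rand(64))
b = circular_shift_histograms(np.random.rand(64))
distance = np.abs(a - b).sum()
print(distance)
```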

  11. IIR algorithms for adaptive line enhancement

    SciTech Connect

    David, R.A.; Stearns, S.D.; Elliott, G.R.; Etter, D.M.

    1983-01-01

    We introduce a simple IIR structure for the adaptive line enhancer. Two algorithms based on gradient-search techniques are presented for adapting the structure. Results from experiments which utilized real data as well as computer simulations are provided.

  12. Educational Outreach: The Space Science Road Show

    NASA Astrophysics Data System (ADS)

    Cox, N. L. J.

    2002-01-01

    The poster presented will give an overview of a study towards a "Space Road Show". The topic of this show is space science. The target group is adolescents, aged 12 to 15, at Dutch high schools. The show and its accompanying experiments would be supported with suitable educational material. Science teachers at schools can decide for themselves if they want to use this material in advance, afterwards or not at all. The aims of this outreach effort are: to motivate students for space science and engineering, to help them understand the importance of (space) research, to give them a positive feeling about the possibilities offered by space and in the process give them useful knowledge on space basics. The show revolves around three main themes: applications, science and society. First the students will get some historical background on the importance of space/astronomy to civilization. Secondly they will learn more about novel uses of space. On the one hand they will learn of "Views on Earth" involving technologies like Remote Sensing (or Spying), Communication, Broadcasting, GPS and Telemedicine. On the other hand they will experience "Views on Space" illustrated by past, present and future space research missions, like the space exploration missions (Cassini/Huygens, Mars Express and Rosetta) and the astronomy missions (Soho and XMM). Meanwhile, the students will learn more about the technology of launchers and satellites needed to accomplish these space missions. Throughout the show and especially towards the end attention will be paid to the third theme "Why go to space"? Other reasons for people to get into space will be explored. An important question in this is the commercial (manned) exploration of space. Thus, the questions of benefit of space to society are integrated in the entire show. It raises some fundamental questions about the effects of space travel on our environment, poverty and other moral issues. The show attempts to connect scientific with

  13. TaDb: A time-aware diffusion-based recommender algorithm

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan

    2015-02-01

    Traditional recommender algorithms usually employ early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable over a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after it in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious, show that the TaDb algorithm significantly improves prediction accuracy compared with algorithms not considering temporal effects.

  14. Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks

    PubMed Central

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-01-01

    Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take precautions. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms consume fewer resources, with energy savings of up to 5.5 times. PMID:23845930
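    For reference, a centralized (non-distributed) bridge check is easy to express with networkx; this is only a baseline sketch for intuition, not the distributed single-phase algorithms proposed in the paper.

```python
import networkx as nx

# Small sensor topology: two triangles joined by the single edge (2, 3),
# which is therefore the only bridge in the graph.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)])

# nx.bridges yields every edge whose removal disconnects the graph.
print(list(nx.bridges(G)))  # [(2, 3)]
```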

  15. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  16. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
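    The Steiner-tree core of the WST idea can be sketched with networkx's approximation module; the utilization-dependent edge weights below are an assumption for illustration, not the paper's exact weighting scheme.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Hypothetical photonic topology with per-link utilization in [0, 1).
G = nx.Graph()
links = [(0, 1, 0.2), (1, 2, 0.7), (0, 2, 0.1), (2, 3, 0.4), (1, 3, 0.9)]
for u, v, util in links:
    # Weight links by utilization so lightly loaded links are preferred,
    # encouraging multicast trees to share common, underused links.
    G.add_edge(u, v, weight=1.0 + util)

source, destinations = 0, [2, 3]
tree = steiner_tree(G, [source] + destinations, weight="weight")
print(sorted(tree.edges()))
```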

  17. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high-speed sampling is studied. First, theoretical simulation models are built and analyzed, covering both the laser emission and the pulse laser ranging algorithm. An improved ranging algorithm is developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After simulating the algorithm, a laser ranging hardware system is set up to implement it. The hardware system includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The improved algorithm, fusing the matched filter and CFD algorithms, is then implemented in an FPGA chip using the Verilog HDL language. Finally, a laser ranging experiment is carried out with the hardware system to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the hardware system achieves high-speed processing and high-speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision. The measured performance meets expectations and is consistent with the theoretical simulation.
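    A minimal numpy sketch of the two ingredients the paper combines, under the assumption of a Gaussian pulse template; the sampling rate, pulse shape, fraction and delay are illustrative assumptions, not the paper's values.

```python
import numpy as np

fs = 1e9                                        # assumed 1 GS/s sampling rate
t = np.arange(2048) / fs
template = np.exp(-((t - 50e-9) / 5e-9) ** 2)   # assumed Gaussian pulse shape

# Simulated echo: delayed, attenuated pulse buried in noise.
echo = 0.2 * np.roll(template, 700) + 0.02 * np.random.randn(t.size)

# Step 1 -- matched filter: cross-correlate the echo with the template.
mf = np.correlate(echo, template, mode="same")

# Step 2 -- constant fraction discrimination on the filtered trace:
# subtract a scaled, delayed copy and locate the zero crossing after
# the correlation peak; the crossing time is amplitude-independent.
frac, d = 0.5, 8
cfd = mf - frac * np.roll(mf, d)
peak = int(np.argmax(mf))
crossing = peak + int(np.argmax(np.diff(np.sign(cfd[peak:])) != 0))
print("echo timing (samples):", crossing)
```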

  18. Evaluating and comparing algorithms for respiratory motion prediction

    NASA Astrophysics Data System (ADS)

    Ernst, F.; Dürichen, R.; Schlaefer, A.; Schweikard, A.

    2013-06-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
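    As a point of reference for the nLMS baseline discussed above, a minimal normalized-LMS one-step-ahead predictor can be sketched as follows; the filter length, step size, and synthetic breathing trace are illustrative assumptions.

```python
import numpy as np

def nlms_predict(signal, order=16, mu=0.5, eps=1e-6):
    """One-step-ahead prediction of a motion trace with normalized LMS."""
    w = np.zeros(order)
    preds = np.zeros_like(signal)
    for n in range(order, signal.size):
        x = signal[n - order:n][::-1]      # most recent samples first
        preds[n] = w @ x                   # predict the next sample
        err = signal[n] - preds[n]
        w += mu * err * x / (eps + x @ x)  # normalized gradient step
    return preds

t = np.linspace(0, 60, 6000)               # 60 s trace, assumed 100 Hz
trace = np.sin(2 * np.pi * 0.25 * t)       # idealized breathing motion
p = nlms_predict(trace)
rel_rms = np.sqrt(np.mean((trace - p) ** 2)) / np.sqrt(np.mean(trace ** 2))
print("relative RMS error:", rel_rms)
```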

  19. Computational algorithms to predict Gene Ontology annotations

    PubMed Central

    2015-01-01

    Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from definitive and rapidly evolves, and some erroneous annotations may be present. Since the curation of novel annotations is a costly procedure, in terms of both money and time, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose improving these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of the genes of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability to predict novel gene annotations, and the weighting procedures led to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper

  20. Tilted cone beam VCT reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Tang, Xiangyang

    2005-04-01

    Reconstruction algorithms for volumetric CT have been the focus of many studies. Several exact and approximate reconstruction algorithms have been proposed for step-and-shoot and helical scanning trajectories to combat cone beam related artifacts. In this paper, we present a closed form cone beam reconstruction formula for tilted gantry data acquisition. Although several algorithms were proposed to compensate for errors induced by the gantry tilt, none of the algorithms addresses the case in which the cone beam geometry is first rebinned to a set of parallel beams prior to the filtered backprojection. Because of the rebinning process, the amount of iso-center adjustment depends not only on the projection angle and tilt angle, but also on the reconstructed pixel location. The proposed algorithm has been tested extensively on both 16 and 64 slice VCT with phantoms and clinical data. The efficacy of the algorithm is clearly demonstrated by the experiments.

  1. Fast Outlier Detection Using a Grid-Based Algorithm.

    PubMed

    Lee, Jihwan; Cho, Nam-Wook

    2016-01-01

    As a data mining technique, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to the computational complexity of the LOF algorithm, its application to large, high-dimensional data has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a number of regions, called "grids", and calculates the LOF value of each grid. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant reduction in computation time with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, grid-LOF can be considered an acceptable approximation of the original LOF. Moreover, it can also be used effectively for real-time outlier detection.
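    A rough sketch of the grid idea using scikit-learn's LOF implementation; the uniform binning scheme and the parameters are assumptions for illustration rather than the paper's exact design.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def grid_lof(X, cells_per_dim=4, k=20):
    """Run LOF per grid cell instead of on the full dataset."""
    X = np.asarray(X)
    # Assign each point to a cell by quantizing every coordinate.
    mins, maxs = X.min(axis=0), X.max(axis=0)
    idx = np.floor((X - mins) / (maxs - mins + 1e-12) * cells_per_dim)
    labels = np.zeros(len(X), dtype=int)   # 0 = cell too small to evaluate
    for cell in np.unique(idx, axis=0):
        mask = np.all(idx == cell, axis=1)
        if mask.sum() > k:                 # need enough neighbors per cell
            lof = LocalOutlierFactor(n_neighbors=k)
            labels[mask] = lof.fit_predict(X[mask])  # -1 marks outliers
    return labels

X = np.random.randn(5000, 2)
print("flagged outliers:", int((grid_lof(X) == -1).sum()))
```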

  2. A novel image fusion algorithm based on human vision system

    NASA Astrophysics Data System (ADS)

    Miao, Qiguang; Wang, Baoshu

    2006-04-01

    The proposed new fusion algorithm is based on an improved pulse coupled neural network (PCNN) model, the fundamental characteristics of images, and the properties of the human vision system. Whereas in the traditional algorithm the linking strength of each neuron is the same and its value is chosen through experimentation, this algorithm uses the contrast of each pixel as its linking strength, so that the linking strength of each pixel can be chosen adaptively. After PCNN processing with the adaptive linking strength, new fire mapping images are obtained for each image taking part in the fusion. The clear objects of each original image are selected by a compare-selection operator applied to the fire mapping images pixel by pixel, and then all of them are merged into a new clear image. Furthermore, with this algorithm other parameters, for example Δ, the threshold adjusting constant, have only a slight effect on the fused image. It therefore overcomes the difficulty of adjusting parameters in PCNN. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform method and the Laplacian pyramid method for image fusion.

  3. [Research on target identification by multi-spectrum separation algorithm].

    PubMed

    Liu, Li-xia; Zhuang, Yi-qi

    2010-10-01

    In view of problems of mechanical scanning optical systems such as poor shock resistance in the field, a low target identification rate and poor real-time performance, a non-scanning target identification remote sensing system was designed using a multi-spectrum separation algorithm. Using a non-scanning M-Z interferometer to provide a spatial optical path difference, interference fringes were collected by an infrared CCD detector. After CPU processing, the system obtains the mixed spectrum information and achieves target identification using a coordinate system combined with the visible-light video image. A genetic algorithm was used to optimize the characteristic wavelengths, and the attributes of the unknown target spectrum were then extracted by rough set classification. Taking the top 1/3 confidence level of the corresponding attribute, the target type was deduced; compared with the traditional algorithm, the amount of computation was reduced by a factor of about nine. Experiments were carried out under different weather and background conditions, yielding the detection limits and identification probabilities of the system under each condition. The experimental data showed that the genetic algorithm and rough set classification, combined with the multi-spectral separation algorithm, can quickly and efficiently identify unknown target types.

  4. Generalized quantum counting algorithm for non-uniform amplitude distribution

    NASA Astrophysics Data System (ADS)

    Tan, Jianing; Ruan, Yue; Li, Xi; Chen, Hanwu

    2017-03-01

    We give a generalized quantum counting algorithm to increase the universality of the quantum counting algorithm. A non-uniform initial amplitude distribution is possible due to the diversity of counting problems or to external noise in the amplitude initialization procedure. We explain why the original quantum counting algorithm fails in this situation. By modeling the entire Hilbert space of n qubits in a three-dimensional space spanned by the unmarked, marked and free states, we find that the Grover iteration can be regarded as an improper rotation in this space. This allows us to give a formula for solving the counting problem. Furthermore, we express the initial amplitude distribution in the eigenvector basis of the improper rotation matrix, which is necessary for a mathematical analysis of the counting problem in various situations. Finally, we design four simulation experiments, the results of which show that, compared with the original quantum counting algorithm, the generalized quantum counting algorithm is satisfactory in three respects: (1) whether the initial amplitude distribution is uniform; (2) the diversity of counting problems; and (3) whether the phase estimation technique can recover the phase exactly.

  5. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are otherwise idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, in which local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
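    For context, the serial Thomas algorithm that the pipelined scheme reorganizes is the standard tridiagonal solve; a compact sketch of its two sweeps (forward elimination, then back substitution) follows.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-, b the main-,
    c the super-diagonal, d the right-hand side."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Usage on a small diffusion-like system.
n = 5
x = thomas(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0), np.ones(n))
print(x)
```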

  6. Iterative optimization algorithm with parameter estimation for the ambulance location problem.

    PubMed

    Kim, Sun Hoon; Lee, Young Hoon

    2016-12-01

    The emergency vehicle location problem to determine the number of ambulance vehicles and their locations satisfying a required reliability level is investigated in this study. This is a complex nonlinear issue involving critical decision making that has inherent stochastic characteristics. This paper studies an iterative optimization algorithm with parameter estimation to solve the emergency vehicle location problem. In the suggested algorithm, a linear model determines the locations of ambulances, while a hypercube simulation is used to estimate and provide parameters regarding ambulance locations. First, we suggest an iterative hypercube optimization algorithm in which interaction parameters and rules for the hypercube and optimization are identified. The interaction rules employed in this study enable our algorithm to always find the locations of ambulances satisfying the reliability requirement. We also propose an iterative simulation optimization algorithm in which the hypercube method is replaced by a simulation, to achieve computational efficiency. The computational experiments show that the iterative simulation optimization algorithm performs equivalently to the iterative hypercube optimization. The suggested algorithms are found to outperform existing algorithms suggested in the literature.

  7. The new image segmentation algorithm using adaptive evolutionary programming and fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    2011-06-01

    Image segmentation remains one of the major challenges in image analysis and computer vision. Fuzzy clustering, as a soft segmentation method, has been widely studied and successfully applied to image clustering and segmentation. The fuzzy c-means (FCM) algorithm is the most popular method used in image segmentation. However, most clustering algorithms, such as k-means and FCM, search for the final cluster values starting from predetermined initial centers, and the FCM algorithm does not consider the spatial information of pixels and is sensitive to noise. This paper presents a new fuzzy c-means algorithm with adaptive evolutionary programming for image clustering. The features of this algorithm are: 1) it does not need predetermined initial centers, since evolutionary programming helps FCM search for better centers and escape bad centers at local minima; 2) both the spatial distance and the Euclidean distance are considered in the FCM clustering, making the algorithm more robust to noise; 3) adaptive evolutionary programming is proposed, in which the mutation rule is adaptively changed by learning useful knowledge during the evolutionary process. Experimental results show that the new image segmentation algorithm is effective and robust to noisy images.
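    A bare-bones fuzzy c-means iteration (without the paper's evolutionary initialization or spatial term), to make the standard update equations concrete; m is the usual fuzzifier and the parameters are illustrative.

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on rows of X; returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        # Centers are membership-weighted means of the data points.
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Membership update: inverse-distance ratios raised to 2/(m-1).
        p = 2.0 / (m - 1.0)
        u = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)
    return centers, u

X = np.vstack([np.random.randn(100, 2) + o for o in (0, 4)])
centers, u = fcm(X, c=2)
print(centers)
```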

  8. A novel mating approach for genetic algorithms.

    PubMed

    Galán, Severino F; Mengshoel, Ole J; Pinter, Rafael

    2013-01-01

    Genetic algorithms typically use crossover, which relies on mating a set of selected parents. As part of crossover, random mating is often carried out. A novel approach to parent mating is presented in this work. Our novel approach can be applied in combination with a traditional similarity-based criterion to measure distance between individuals or with a fitness-based criterion. We introduce a parameter called the mating index that allows different mating strategies to be developed within a uniform framework: an exploitative strategy called best-first, an explorative strategy called best-last, and an adaptive strategy called self-adaptive. Self-adaptive mating is defined in the context of the novel algorithm, and aims to achieve a balance between exploitation and exploration in a domain-independent manner. The present work formally defines the novel mating approach, analyzes its behavior, and conducts an extensive experimental study to quantitatively determine its benefits. In the domain of real function optimization, the experiments show that, as the degree of multimodality of the function at hand grows, increasing the mating index improves performance. In the case of the self-adaptive mating strategy, the experiments give strong results for several case studies.

  9. Saving Resources with Plagues in Genetic Algorithms

    SciTech Connect

    de Vega, F F; Cantu-Paz, E; Lopez, J I; Manzano, T

    2004-06-15

    The population size of genetic algorithms (GAs) affects the quality of the solutions and the time required to find them. While progress has been made in estimating the population sizes required to reach a desired solution quality for certain problems, in practice the sizing of populations is still usually performed by trial and error. These trials might find a population that is large enough to reach a satisfactory solution, but there may still be opportunities to reduce the computational cost by shrinking the population. This paper presents a technique called plague that periodically removes a number of individuals from the population as the GA executes. Recently, the usefulness of the plague has been demonstrated for genetic programming. The objective of this paper is to extend the study of plagues to genetic algorithms. We experiment with deceptive trap functions, a tunably difficult problem for GAs, and the experiments show that plagues can save computational time while maintaining solution quality and reliability.

  10. GenClust: A genetic algorithm for clustering gene expression data

    PubMed Central

    Di Gesú, Vito; Giancarlo, Raffaele; Lo Bosco, Giosué; Raimondi, Alessandra; Scaturro, Davide

    2005-01-01

    Background Clustering is a key step in the analysis of gene expression data, and in fact, many classical clustering algorithms are used, or more innovative ones have been designed and validated for the task. Despite the widespread use of artificial intelligence techniques in bioinformatics and, more generally, data analysis, there are very few clustering algorithms based on the genetic paradigm, yet that paradigm has great potential in finding good heuristic solutions to a difficult optimization problem such as clustering. Results GenClust is a new genetic algorithm for clustering gene expression data. It has two key features: (a) a novel coding of the search space that is simple, compact and easy to update; (b) it can be used naturally in conjunction with data driven internal validation methods. We have experimented with the FOM methodology, specifically conceived for validating clusters of gene expression data. The validity of GenClust has been assessed experimentally on real data sets, both with the use of validation measures and in comparison with other algorithms, i.e., Average Link, Cast, Click and K-means. Conclusion Experiments show that none of the algorithms we have used is markedly superior to the others across data sets and validation measures; i.e., in many cases the observed differences between the worst and best performing algorithm may be statistically insignificant and they could be considered equivalent. However, there are cases in which an algorithm may be better than others and therefore worthwhile. In particular, experiments for GenClust show that, although simple in its data representation, it converges very rapidly to a local optimum and that its ability to identify meaningful clusters is comparable, and sometimes superior, to that of more sophisticated algorithms. In addition, it is well suited for use in conjunction with data driven internal validation measures and, in particular, the FOM methodology. PMID:16336639

  11. A multi-template combination algorithm for protein comparative modeling

    PubMed Central

    Cheng, Jianlin

    2008-01-01

    Background Multiple protein templates are commonly used in manual protein structure prediction. However, few automated algorithms of selecting and combining multiple templates are available. Results Here we develop an effective multi-template combination algorithm for protein comparative modeling. The algorithm selects templates according to the similarity significance of the alignments between template and target proteins. It combines the whole template-target alignments whose similarity significance score is close to that of the top template-target alignment within a threshold, whereas it only takes alignment fragments from a less similar template-target alignment that align with a sizable uncovered region of the target. We compare the algorithm with the traditional method of using a single top template on the 45 comparative modeling targets (i.e. easy template-based modeling targets) used in the seventh edition of Critical Assessment of Techniques for Protein Structure Prediction (CASP7). The multi-template combination algorithm improves the GDT-TS scores of predicted models by 6.8% on average. The statistical analysis shows that the improvement is significant (p-value < 10-4). Compared with the ideal approach that always uses the best template, the multi-template approach yields only slightly better performance. During the CASP7 experiment, the preliminary implementation of the multi-template combination algorithm (FOLDpro) was ranked second among 67 servers in the category of high-accuracy structure prediction in terms of GDT-TS measure. Conclusion We have developed a novel multi-template algorithm to improve protein comparative modeling. PMID:18366648

  12. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  13. First use of cognitive algorithms in investigations under compensated gravity.

    PubMed

    Delgado, A; Nirschl, H; Becker, T h

    1996-01-01

    In the present paper the use of cognitive algorithms for solving a wide spectrum of problems which often arise in investigations under compensated gravity is suggested. Applying such algorithms in the preparation and performance of experiments provides substantial assistance to the experimenter, as the behaviour of complex processes can be described and predicted correctly even when unexpected perturbations occur. Furthermore, an essential advantage of cognitive computing is that the processes considered can be described and optimised even in cases where the corresponding basic equations are not known or are not practically treatable. For convenience, the basic ideas of cognitive algorithms are discussed here. Due to their special relevance for investigations under compensated gravity, algorithms based on fuzzy logic (FL) and artificial neural networks (ANN) are elucidated in more detail. In order to illustrate some advantages of cognitive computing, exemplary results for the flow field induced by coaxial rotating disks are given. This represents the first attempt to use the benefits provided by cognitive algorithms in investigations under compensated gravity. The flow field between rotating disks plays an important role not only in experiments under compensated gravity but also in a wide range of terrestrial applications. A comparison of the results found by solving the Navier-Stokes equations with those predicted by an adequately trained ANN shows excellent agreement. However, the calculation times needed by the ANN are significantly smaller than those of the direct numerical simulation. Therefore, real-time prediction of the results of a running experiment appears possible.

  14. Rotational Invariant Dimensionality Reduction Algorithms.

    PubMed

    Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David

    2016-06-30

    A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L₂ norm as the metric. In this paper, a series of methods based on the L₂,₁-norm is proposed for linear dimensionality reduction. Since the L₂,₁-norm-based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework, and show that the optimization problems have globally optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms obtain competitive performance compared with previous L₂-norm-based subspace learning algorithms.
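    For reference, the L₂,₁ norm underlying the robustness claim is standard; for a d × k matrix W with rows wⁱ it reads:

```latex
\|W\|_{2,1} \,=\, \sum_{i=1}^{d} \Big( \sum_{j=1}^{k} w_{ij}^{2} \Big)^{1/2} \,=\, \sum_{i=1}^{d} \|w^{i}\|_{2}
```

    Because each row enters through its unsquared Euclidean length, a single aberrant row (or, when the norm is applied to a residual matrix, a single outlying sample) contributes linearly rather than quadratically, which is the source of the robustness relative to the squared Frobenius norm.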

  15. Region labeling algorithm based on boundary tracking for binary image

    NASA Astrophysics Data System (ADS)

    Chen, Li; Yang, Yang; Cen, Zhaofeng; Li, Xiaotong

    2010-11-01

    Region labeling for binary images is an important part of image processing. For the special case of labeling small, multiple objects, a new region labeling algorithm based on boundary tracking is proposed in this paper. Experiments show that our algorithm is feasible and efficient, and even faster than some other algorithms.

  16. Award 1 Title: Acoustic Communications 2011 Experiment: Deployment Support and Post Experiment Data Handling and Analysis. Award 2 Title: Exploiting Structured Dependencies in the Design of Adaptive Algorithms for Underwater Communication Award. 3 Title: Coupled Research in Ocean Acoustics and Signal Processing for the Next Generation of Underwater Acoustic Communication Systems

    DTIC Science & Technology

    2015-09-30

    Exploiting Structured Dependencies in the Design of Adaptive Algorithms for Underwater Communication Award #3 Title Coupled Research in Ocean Acoustics...depend on the physical oceanography and pushing the state of the art in our understanding of adaptive signal processing algorithms relevant to...deployable VHF acoustic data transmission and acquisition system. 3. Develop signal models and processing algorithms that reduce to the extent

  17. Hubble Space Telescope characterized by using phase-retrieval algorithms.

    PubMed

    Fienup, J R; Marron, J C; Schulz, T J; Seldin, J H

    1993-04-01

    We describe several results characterizing the Hubble Space Telescope from measured point spread functions by using phase-retrieval algorithms. The Cramer-Rao lower bounds show that point spread functions taken well out of focus result in smaller errors when aberrations are estimated and that, for those images, photon noise is not a limiting factor. Reconstruction experiments with both simulated and real data show that the calculation of wave-front propagation by the retrieval algorithms must be performed with a multiple-plane propagation rather than a simple fast Fourier transform to ensure the high accuracy required. Pupil reconstruction was performed and indicates a misalignment of the optical axis of a camera relay telescope relative to the main telescope. After we accounted for measured spherical aberration in the relay telescope, our estimate of the conic constant of the primary mirror of the HST was - 1.0144.

  18. Genetic Algorithms Viewed as Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Mocanu, Irina; Kalisz, Eugenia; Negreanu, Lorina

    2010-11-01

    This paper proposes a new version of genetic algorithms: the anticipatory genetic algorithm (AGA). The performance evaluation included in the paper shows that AGA is superior to the traditional genetic algorithm in both speed and accuracy. The paper also presents how this algorithm can be applied to solve a complex problem: image annotation, intended for use in content-based image retrieval systems.

  19. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
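    A toy illustration of the idea, assuming simple annealed Gaussian noise added to the centroid updates of k-means; this sketches the noise-benefit concept only and is not the paper's formal noisy-EM construction.

```python
import numpy as np

def noisy_kmeans(X, k=3, n_iter=50, noise0=0.5, seed=0):
    """k-means whose centroid updates are perturbed by decaying noise."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for it in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
        # Annealed noise: large early (helps escape poor partitions),
        # vanishing later so the iteration still settles down.
        centers += rng.normal(0, noise0 / (it + 1), centers.shape)
    return centers, assign

X = np.vstack([np.random.randn(200, 2) + o for o in ((0, 0), (5, 0), (0, 5))])
centers, _ = noisy_kmeans(X)
print(centers.round(2))
```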

  20. Implementing the Deutsch-Jozsa algorithm with macroscopic ensembles

    NASA Astrophysics Data System (ADS)

    Semenenko, Henry; Byrnes, Tim

    2016-05-01

    Quantum computing implementations under consideration today typically deal with systems with microscopic degrees of freedom such as photons, ions, cold atoms, and superconducting circuits. The quantum information is stored typically in low-dimensional Hilbert spaces such as qubits, as quantum effects are strongest in such systems. It has, however, been demonstrated that quantum effects can be observed in mesoscopic and macroscopic systems, such as nanomechanical systems and gas ensembles. While few-qubit quantum information demonstrations have been performed with such macroscopic systems, a quantum algorithm showing exponential speedup over classical algorithms is yet to be shown. Here, we show that the Deutsch-Jozsa algorithm can be implemented with macroscopic ensembles. The encoding that we use avoids the detrimental effects of decoherence that normally plagues macroscopic implementations. We discuss two mapping procedures which can be chosen depending upon the constraints of the oracle and the experiment. Both methods have an exponential speedup over the classical case, and only require control of the ensembles at the level of the total spin of the ensembles. It is shown that both approaches reproduce the qubit Deutsch-Jozsa algorithm, and are robust under decoherence.

  1. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

    Vehicle-mounted panoramic systems are important safety assistance equipment for driving. However, traditional systems render only a fixed top-down perspective view of a limited field, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm renders a good 3D panorama and allows the view point to be changed freely.

  2. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  3. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  4. Wavelet speech enhancement algorithm using exponential semi-soft mask filtering.

    PubMed

    Lee, Gihyoun; Dae Na, Sung; Seong, KiWoong; Cho, Jin-Ho; Nam Kim, Myoung

    2016-09-02

    In this paper, we propose a new speech enhancement algorithm based on wavelet packet decomposition and mask filtering. In traditional mask filtering, such as the ideal binary mask (IBM), the basic idea is to classify speech components as target signal and non-speech components as background noise. However, speech and non-speech components cannot be cleanly separated into target signal and background noise, so the IBM suffers from residual noise and signal loss. To overcome this problem, the proposed algorithm uses an exponentially increasing semi-soft mask filter: the semi-soft mask minimizes signal loss, and the exponential weighting removes residual noise. We performed experiments using various types of speech and noise signals, and the results show that the proposed algorithm achieves better performance than traditional speech enhancement algorithms.
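    To make the contrast concrete, here is a sketch of a binary mask next to a generic exponential soft mask over time-frequency SNR estimates; the exact mask shape used in the paper is not reproduced, and `alpha` is an assumed steepness parameter.

```python
import numpy as np

def ideal_binary_mask(snr_db, threshold_db=0.0):
    """Keep a time-frequency cell only if local SNR exceeds a threshold."""
    return (snr_db > threshold_db).astype(float)

def exponential_soft_mask(snr_db, alpha=0.5):
    """Smoothly ramp the gain with SNR instead of switching it on/off."""
    snr = 10.0 ** (snr_db / 10.0)
    return 1.0 - np.exp(-alpha * snr)      # -> 0 in noise, -> 1 in speech

snr_db = np.linspace(-20, 20, 9)
print(ideal_binary_mask(snr_db))
print(exponential_soft_mask(snr_db).round(3))
```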

  5. Improved motion contrast and processing efficiency in OCT angiography using complex-correlation algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng

    2016-02-01

    The complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both the intensity and phase information. However, due to involuntary bulk tissue motions, complex-valued OCT raw data are processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs) and extracting flow signals. Such a complicated procedure results in massive computational load. To mitigate such a problem, in this work, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superiority in motion contrast. The feasibility and performance of the proposed CC algorithm is demonstrated using both flow phantom and live animal experiments.

  6. Fast computing global structural balance in signed networks based on memetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng

    2014-12-01

    Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, is to measure how unbalanced a signed network is and it is a nondeterministic polynomial-time hard problem. Many approaches are developed to compute global balance. However, the results obtained by them are partial and unsatisfactory. In this study, the computation of global structural balance is solved as an optimization problem by using the Memetic Algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, energy function, which is used to compute a distance to exact balance. Our proposed algorithm combines Genetic Algorithm and a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
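    The energy function mentioned above is, in the standard formulation of global balance (stated here for reference; the paper's notation may differ), a count of frustrated edges:

```latex
E(s) \,=\, \sum_{(i,j)\in\mathcal{E}} \frac{1 - J_{ij}\, s_i s_j}{2}, \qquad s_i \in \{-1,+1\},\; J_{ij} \in \{-1,+1\}
```

    Here J_ij is the sign of edge (i, j) and s assigns each node to one of two factions; minimizing E over all assignments gives the distance to exact balance, with E = 0 if and only if the network is structurally balanced.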

  7. A Speech Endpoint Detection Algorithm Based on BP Neural Network and Multiple Features

    NASA Astrophysics Data System (ADS)

    Shi, Yong-Qiang; Li, Ru-Wei; Zhang, Shuang; Wang, Shuai; Yi, Xiao-Qun

    To address the sharp decline in the performance of endpoint detection algorithms in complicated noise environments, a new speech endpoint detection method based on a BPNN (back-propagation neural network) and multiple features is presented. First, the maximum of the short-time autocorrelation function and the spectrum variance of the speech signal are extracted. Second, these feature vectors are used as the input for training the BP neural network, and the Genetic Algorithm is then used to optimize the network. Finally, the signal's type is determined according to the output of the neural network. The experiments show that the correct rate of the proposed algorithm is improved, because this method has better robustness and adaptability than algorithms based on the maximum of the short-time autocorrelation function or the spectrum variance alone.

  8. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality, and these images typically have a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not suitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches few candidate points to obtain the optimal motion vector and is compared with the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector more efficiently, with almost equal estimation quality, compared to the traditional IFSA method. PMID:25873987

  9. A survey of the baseline correction algorithms for real-time spectroscopy processing

    NASA Astrophysics Data System (ADS)

    Liu, Yuanjie; Yu, Yude

    2016-11-01

    In spectroscopy data analysis, such as Raman spectra, X-ray diffraction and fluorescence, baseline drift is a ubiquitous issue. In high-speed testing, which generates huge amounts of data, automatic baseline correction is very important for efficient data processing. We survey the algorithms from the classical Shirley background to state-of-the-art methods to provide a summary of this specific field. Both the advantages and the defects of each algorithm are scrutinized. To compare the algorithms with each other, experiments are also carried out under an SVM gap-gain criterion to show the performance quantitatively. Finally, a ranking table of these methods is built and suggestions for the practical choice of adequate algorithms are provided.

  10. Advanced modularity-specialized label propagation algorithm for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Liu, X.; Murata, T.

    2010-04-01

    A modularity-specialized label propagation algorithm (LPAm) for detecting network communities was recently proposed. This promising algorithm offers some desirable qualities. However, LPAm favors community divisions where all communities are similar in total degree and thus it is prone to get stuck in poor local maxima in the modularity space. To escape local maxima, we employ a multistep greedy agglomerative algorithm (MSG) that can merge multiple pairs of communities at a time. Combining LPAm and MSG, we propose an advanced modularity-specialized label propagation algorithm (LPAm+). Experiments show that LPAm+ successfully detects communities with higher modularity values than ever reported in two commonly used real-world networks. Moreover, LPAm+ offers a fair compromise between accuracy and speed.

  11. The analysis of multigrid algorithms for pseudodifferential operators of order minus one

    SciTech Connect

    Bramble, J.H.; Leyk, Z.; Pasciak, J.E.

    1994-10-01

    Multigrid algorithms are developed to solve the discrete systems approximating the solutions of operator equations involving pseudodifferential operators of order minus one. Classical multigrid theory deals with the case of differential operators of positive order. The pseudodifferential operator gives rise to a coercive form on H^{-1/2}(Ω). Effective multigrid algorithms are developed for this problem. These algorithms are novel in that they use the inner product on H^{-1}(Ω) as a base inner product for the multigrid development. The authors show that the resulting rate of iterative convergence can, at worst, depend linearly on the number of levels in these novel multigrid algorithms. In addition, it is shown that the convergence rate is independent of the number of levels (and unknowns) in the case of a pseudodifferential operator defined by a single-layer potential. Finally, the results of numerical experiments illustrating the theory are presented. 19 refs., 1 fig., 2 tabs.

  12. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of Infrared Focal Plane Array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. To meet the demand for better visual effects and good performance, the FPGA is an ideal hardware choice for implementing image processing algorithms, taking full advantage of its high speed, high reliability and ability to process large amounts of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at higher speed than serial processing devices such as CPUs and DSPs. Experiments show that this hardware implementation of the dynamic linear extension algorithm effectively enhances the visual quality of infrared images.
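    The dynamic linear extension described above behaves like an automatic contrast stretch; a software sketch of the idea follows, with percentile-based range finding as an assumed way to pick the extension range (the paper's FPGA range rule is not specified here).

```python
import numpy as np

def dynamic_linear_extension(frame, lo_pct=1.0, hi_pct=99.0, out_max=255):
    """Map the automatically chosen input range [lo, hi] linearly
    onto the full display range [0, out_max]."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])  # assumed range rule
    stretched = (frame.astype(float) - lo) / max(hi - lo, 1e-9) * out_max
    return np.clip(stretched, 0, out_max).astype(np.uint8)

# 14-bit raw IR frame with a narrow dynamic range, as a stand-in input.
raw = (np.random.randn(256, 256) * 60 + 8000).astype(np.uint16)
enhanced = dynamic_linear_extension(raw)
print(enhanced.min(), enhanced.max())
```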

  13. Comparison between summing-up algorithms to determine areas of small peaks on high baselines

    NASA Astrophysics Data System (ADS)

    Shi, Quanlin; Zhang, Jiamei; Chang, Yongfu; Qian, Shaojun

    2005-12-01

    It is found that the minimum detectable activity (MDA) follows the same tendency as the relative standard deviation (RSD), and that a particular application is characterized by the ratio of the peak area to the baseline height. Different applications need different algorithms to reduce the RSD of peak areas or the MDA of potential peaks. A model of Gaussian peaks superposed on linear baselines is established to simulate the multichannel spectrum, and summing-up algorithms such as total peak area (TPA), Covell, and Sterlinski are compared to find the most appropriate algorithm for different applications. The results show that the optimal Covell and Sterlinski algorithms yield an MDA or RSD about half that of TPA when the areas of small peaks on high baselines are to be determined. The conclusion is confirmed by experiment.
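
    The textbook forms of two of the estimators compared in this abstract can be sketched as follows; the window sizes and synthetic spectrum are illustrative, and the paper's optimized Covell/Sterlinski variants differ in their channel weightings.

    ```python
    import numpy as np

    def tpa_area(y, left, right, nb=5):
        """Total peak area: sum the peak window and subtract a straight-line
        baseline estimated from nb background channels on each side."""
        b_left = y[left - nb:left].mean()
        b_right = y[right + 1:right + 1 + nb].mean()
        n = right - left + 1
        return y[left:right + 1].sum() - n * (b_left + b_right) / 2.0

    def covell_area(y, center, m):
        """Covell estimate: sum 2m+1 channels around the peak and subtract
        the trapezoid through the two boundary channels."""
        window = y[center - m:center + m + 1]
        return window.sum() - (m + 0.5) * (y[center - m] + y[center + m])

    # small Gaussian peak on a high linear baseline (the paper's scenario)
    x = np.arange(200)
    y = 1000 + 0.5 * x + 80 * np.exp(-0.5 * ((x - 100) / 4) ** 2)
    print(tpa_area(y, 88, 112), covell_area(y, 100, 8))
    ```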

  14. Multi-Core Parallel Implementation of Data Filtering Algorithm for Multi-Beam Bathymetry Data

    NASA Astrophysics Data System (ADS)

    Liu, Tianyang; Xu, Weiming; Yin, Xiaodong; Zhao, Xiliang

    In order to improve the multi-beam bathymetry data processing speed, we propose a parallel filtering algorithm based on multi-thread technology. The algorithm consists of two parts. The first is a parallel data re-ordering step, in which the surveying area is divided into a regular grid and the discrete bathymetry data are arranged into the grid cells in parallel. The second is a parallel filtering step, which divides the grid into blocks and executes the filtering process in each block in parallel. In our experiment, the speedup of the proposed algorithm reaches about 3.67 on an 8-core computer. The result shows the method can improve computing efficiency significantly compared to the traditional algorithm.
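
    A minimal Python sketch of the two-part structure (grid re-ordering, then per-block parallel filtering) follows; the 3-sigma rejection rule stands in for the paper's actual filter, which the abstract does not specify.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def filter_block(block):
        """Stand-in filter: reject soundings more than 3 sigma from the
        block median (the paper's actual filter is not specified here)."""
        depths = block[:, 2]
        med, sd = np.median(depths), depths.std() + 1e-9
        return block[np.abs(depths - med) < 3 * sd]

    def parallel_filter(points, cell=50.0, workers=4):
        # step 1: re-order discrete soundings into a regular grid of cells
        keys = (points[:, :2] // cell).astype(int)
        _, inv = np.unique(keys, axis=0, return_inverse=True)
        blocks = [points[inv == k] for k in range(inv.max() + 1)]
        # step 2: filter every block in parallel
        with ProcessPoolExecutor(max_workers=workers) as ex:
            return np.vstack(list(ex.map(filter_block, blocks)))

    if __name__ == "__main__":
        pts = np.column_stack([np.random.rand(10000) * 1000,
                               np.random.rand(10000) * 1000,
                               -50 + np.random.randn(10000)])
        clean = parallel_filter(pts)
    ```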

  15. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
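
    One plausible reading of the shingle technique applied to document structure is to shingle the tag sequence and compare sets with Jaccard similarity, as sketched below; the exact encoding used in the paper may differ.

    ```python
    import re

    def tag_shingles(html, k=4):
        """Reduce a document to its tag sequence, then take all length-k
        windows (shingles) of that sequence."""
        tags = re.findall(r"</?([a-zA-Z][a-zA-Z0-9]*)", html)
        return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

    def structural_similarity(a, b, k=4):
        """Jaccard overlap of tag shingles: 1.0 = identical structure."""
        sa, sb = tag_shingles(a, k), tag_shingles(b, k)
        if not sa and not sb:
            return 1.0
        return len(sa & sb) / len(sa | sb)

    page1 = "<html><body><div><p>x</p><p>y</p></div></body></html>"
    page2 = "<html><body><div><p>a</p><p>b</p><p>c</p></div></body></html>"
    print(structural_similarity(page1, page2))
    ```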

  16. An ROLAP Aggregation Algorithm with the Rules Being Specified

    NASA Astrophysics Data System (ADS)

    Zhengqiu, Weng; Tai, Kuang; Lina, Zhang

    This paper introduces the basic theory of data warehousing and ROLAP, and presents a new kind of ROLAP aggregation algorithm that incorporates calculation rules. It addresses the low accuracy of the traditional aggregation algorithm, which aggregates only by addition. By aggregating according to specified business rules, the proposed algorithm improves accuracy. Key designs and procedures are presented. Compared with the traditional method, its efficiency is demonstrated in an experiment.

  17. An Algorithm for Finding the Singleton Attractors and Pre-Images in Strong-Inhibition Boolean Networks.

    PubMed

    He, Zhiwei; Zhan, Meng; Liu, Shuai; Fang, Zebo; Yao, Chenggui

    2016-01-01

    The detection of singleton attractors is of great significance for the systematic study of genetic regulatory networks. In this paper, we design an algorithm to compute the singleton attractors and pre-images of strong-inhibition Boolean networks, a biophysically plausible gene model. Our algorithm can not only accurately identify the singleton attractors but also easily find the pre-images of the network. Based on extensive computational experiments, we show that the computational time of the algorithm is proportional to the number of singleton attractors, which indicates that the algorithm has a clear advantage in finding the singleton attractors of networks with high average degree and fewer inhibitory interactions. Our algorithm may shed light on understanding the function and structure of strong-inhibition Boolean networks.
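
    Since a singleton attractor is simply a fixed point of the update functions, the brute-force baseline that such algorithms improve upon can be sketched as follows; the toy network is illustrative and does not follow the strong-inhibition model's specific update rules.

    ```python
    from itertools import product

    def singleton_attractors(update_fns, n):
        """Exhaustively find fixed points x with f(x) = x. The paper's
        algorithm prunes this search using the strong-inhibition structure;
        this brute force is only feasible for small n."""
        fixed = []
        for state in product((0, 1), repeat=n):
            if all(f(state) == state[i] for i, f in enumerate(update_fns)):
                fixed.append(state)
        return fixed

    # toy 3-node network with one inhibitory edge (2 inhibits 1)
    fns = [lambda s: s[0],                  # node 0: self-sustaining
           lambda s: s[0] and not s[2],     # node 1: activated by 0, inhibited by 2
           lambda s: int(s[1] or s[2])]     # node 2: OR of 1 and itself
    print(singleton_attractors(fns, 3))
    ```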

  18. A novel harmony search algorithm based on teaching-learning strategies for 0-1 knapsack problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an

    2014-01-01

    To enhance the performance of the harmony search (HS) algorithm in solving discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimensions of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. Another improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, which effectively maintains the proper balance between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems.
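
    For reference, a plain harmony search for the 0-1 knapsack looks like the sketch below; HSTL layers dynamic dimension selection and teaching-learning moves on top of this skeleton, none of which are shown. Parameter values are illustrative.

    ```python
    import random

    def harmony_search_knapsack(values, weights, capacity,
                                hms=20, hmcr=0.9, par=0.3, iters=5000, seed=1):
        """Plain harmony search for the 0-1 knapsack. HSTL adds dynamic
        dimension selection and teaching-learning moves on top of this."""
        rng = random.Random(seed)
        n = len(values)

        def fitness(x):  # infeasible harmonies score zero
            w = sum(wi for wi, xi in zip(weights, x) if xi)
            return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0

        memory = [[rng.randint(0, 1) for _ in range(n)] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for j in range(n):
                if rng.random() < hmcr:                 # memory consideration
                    bit = rng.choice(memory)[j]
                    if rng.random() < par:              # pitch adjustment = flip
                        bit ^= 1
                else:                                   # random selection
                    bit = rng.randint(0, 1)
                new.append(bit)
            worst = min(range(hms), key=lambda i: fitness(memory[i]))
            if fitness(new) > fitness(memory[worst]):
                memory[worst] = new
        return max(memory, key=fitness)

    best = harmony_search_knapsack([60, 100, 120], [10, 20, 30], 50)
    ```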

  19. An Algorithm for Finding the Singleton Attractors and Pre-Images in Strong-Inhibition Boolean Networks

    PubMed Central

    He, Zhiwei; Zhan, Meng; Liu, Shuai; Fang, Zebo; Yao, Chenggui

    2016-01-01

    The detection of singleton attractors is of great significance for the systematic study of genetic regulatory networks. In this paper, we design an algorithm to compute the singleton attractors and pre-images of strong-inhibition Boolean networks, a biophysically plausible gene model. Our algorithm can not only accurately identify the singleton attractors but also easily find the pre-images of the network. Based on extensive computational experiments, we show that the computational time of the algorithm is proportional to the number of singleton attractors, which indicates that the algorithm has a clear advantage in finding the singleton attractors of networks with high average degree and fewer inhibitory interactions. Our algorithm may shed light on understanding the function and structure of strong-inhibition Boolean networks. PMID:27861624

  20. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering.

    PubMed

    Ma, Li; Li, Yang; Fan, Suohai; Fan, Runzhu

    2015-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the popular clustering algorithms for medical image segmentation. However, FCM depends on the initial clustering centers, falls easily into local optimal solutions, and is sensitive to noise disturbance. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA). The proposed algorithm combines the artificial fish swarm algorithm (AFSA) with FCM, exploiting the global optimization searching and parallel computing abilities of AFSA to find a superior result. Meanwhile, a Metropolis criterion and a noise reduction mechanism are introduced into AFSA to enhance the convergence rate and antinoise ability. Artificial grid graphs and Magnetic Resonance Imaging (MRI) data are used in the experiments, and the experimental results show that the proposed algorithm has stronger antinoise ability and higher precision. A number of evaluation indicators also demonstrate that HAFSA outperforms FCM and suppressed FCM (SFCM).

  1. A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an

    2014-01-01

    To enhance the performance of the harmony search (HS) algorithm in solving discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimensions of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. Another improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, which effectively maintains the proper balance between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems. PMID:24574905

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  3. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  4. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  5. Heuristic-based tabu search algorithm for folding two-dimensional AB off-lattice model proteins.

    PubMed

    Liu, Jingfa; Sun, Yuanyuan; Li, Gang; Song, Beibei; Huang, Weibo

    2013-12-01

    The protein structure prediction problem is a classical NP-hard problem in bioinformatics. The lack of an effective global optimization method is the key obstacle in solving this problem. As one of the global optimization algorithms, the tabu search (TS) algorithm has been successfully applied to many optimization problems. Building on the original TS algorithm, we define a new neighborhood conformation, tabu object, and acceptance criterion for the current conformation, and put forward an improved TS algorithm. By integrating a heuristic initialization mechanism, a heuristic conformation updating mechanism, and the gradient method into the improved TS algorithm, a heuristic-based tabu search (HTS) algorithm is presented for predicting the two-dimensional (2D) protein folding structure in the AB off-lattice model, which consists of hydrophobic (A) and hydrophilic (B) monomers. The tabu search minimization leads to the basins of local minima, near which a local search mechanism is then proposed to search further for lower-energy conformations. To test the performance of the proposed algorithm, experiments are performed on four Fibonacci sequences and two real protein sequences. The experimental results show that the proposed algorithm has found the lowest-energy conformations so far for three shorter Fibonacci sequences and improved the best results for the longest one, as well as for the two real protein sequences, demonstrating that the HTS algorithm is quite promising for finding the ground states of AB off-lattice model proteins.

  6. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling.

    PubMed

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with the ABC algorithm, as mRMR-ABC, to select informative genes from microarray profiles. The new approach uses a support vector machine (SVM) algorithm to measure the classification accuracy of the selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques, reimplementing two of them with the same parameters for the sake of a fair comparison: mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes on these datasets compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.

  7. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms, on the premise that the zero velocity interval (ZVI) is detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and calculates the optimal ZVI detection threshold in real time by establishing functional relationships between the thresholds and the gait frequency; the adaptive adjustment of thresholds with gait frequency is thereby realized, improving the ZVI detection precision. To put it into practice, a ZVI detection experiment was carried out; the result shows that, compared with the traditional fixed-threshold ZVI detection method, the adaptive algorithm effectively reduces the false and missed detection rates of the ZVI, indicating that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds were carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that pedestrian trajectories calculated from the ZVIs detected by the adaptive algorithm achieve better performance. PMID:27669266

  8. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.

  9. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    SciTech Connect

    Cates, J; Drzymala, R

    2015-06-15

    Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell Gamma Plan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame and to accommodate various materials in the form of one-inch-diameter cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose of unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, heterogeneities in the beam path do induce dose delivery error when using the TMR10 algorithm, with the largest errors due to the heterogeneities with electron densities most different from that of water, i.e., air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7–12% improvement over the TMR10 algorithm. The convolution algorithm's expected dose was accurate to within 3% in all cases. Conclusion: This study shows that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine the heterogeneity size/volume limits within which this improvement exists, and in what clinical and/or research cases it is relevant.

  10. Listless zerotree image compression algorithm

    NASA Astrophysics Data System (ADS)

    Lian, Jing; Wang, Ke

    2006-09-01

    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image quality. Moreover, the lists in SPIHT are replaced by flag maps, and a lifting scheme is adopted to realize the wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.

  11. Improvements to previous algorithms to predict gene structure and isoform concentrations using Affymetrix Exon arrays

    PubMed Central

    2010-01-01

    Background: Exon arrays provide a way to measure the expression of different isoforms of genes in an organism. Most procedures for dealing with these arrays focus on gene expression or on exon expression. Although the only biological analytes that can properly be assigned a concentration are transcripts, very few algorithms focus on them, because previously developed summarization methods do not work well when applied to transcripts. In addition, gene structure prediction, i.e., the correspondence between probes and novel isoforms, remains largely unexplored. Results: We have modified and adapted a previous algorithm to take advantage of the special characteristics of Affymetrix exon arrays. The structure and concentration of transcripts, some of them possibly unknown, in microarray experiments were predicted using this algorithm. Simulations showed that the suggested modifications improved both the specificity (SP) and sensitivity (ST) of the predictions. The algorithm was also applied to different real datasets, showing its effectiveness and concordance with PCR-validated results. Conclusions: The proposed algorithm shows a substantial improvement in performance over the previous version, mainly due to the exploitation of the redundancy of Affymetrix exon arrays. An R package of SPACE with the updated algorithms has been developed and is freely available. PMID:21110835

  12. Influence of Fiber Bragg Grating Spectrum Degradation on the Performance of Sensor Interrogation Algorithms

    PubMed Central

    Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis

    2014-01-01

    The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
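
    Of the four trackers compared, centroid detection is the simplest to sketch; the threshold fraction and the idealized spectrum below are our assumptions.

    ```python
    import numpy as np

    def centroid_wavelength(wl, refl, threshold=0.5):
        """Centroid detection: power-weighted mean wavelength of all samples
        above a fraction of the peak reflectivity. Spectral distortion
        biases this estimate, which is what the paper quantifies."""
        mask = refl >= threshold * refl.max()
        return np.sum(wl[mask] * refl[mask]) / np.sum(refl[mask])

    # idealized (undistorted) FBG reflection spectrum around 1550 nm
    wl = np.linspace(1549.0, 1551.0, 2001)
    refl = np.exp(-0.5 * ((wl - 1550.2) / 0.05) ** 2)
    print(centroid_wavelength(wl, refl))   # ~1550.2 nm
    ```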

  13. Adaptive-feedback control algorithm.

    PubMed

    Huang, Debin

    2006-06-01

    This paper is motivated by giving detailed proofs of, and some interesting remarks on, the results the author obtained in a series of papers [Phys. Rev. Lett. 93, 214101 (2004); Phys. Rev. E 71, 037203 (2005); 69, 067201 (2004)], where an adaptive-feedback algorithm was proposed to effectively stabilize and synchronize chaotic systems. This note proves the rigor of this algorithm in detail from a mathematical viewpoint and gives some interesting remarks on its potential applications to chaos control and synchronization. In addition, a significant comment on synchronization-based parameter estimation is given, showing that some techniques proposed in the literature are less rigorous and ineffective in some cases.

  14. Gossip algorithms in quantum networks

    NASA Astrophysics Data System (ADS)

    Siomau, Michael

    2017-01-01

    "Gossip algorithms" is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.

  15. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.

  16. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
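
    The abstract does not give the estimator's details, but a standard way to relate configuration-space sampling to free energy is Metropolis sampling plus reweighting between nearby temperatures, sketched here for the 2D Ising model; this is a generic construction, not necessarily the authors' scheme.

    ```python
    import numpy as np

    def metropolis_ising(L=12, beta=0.4, sweeps=1000, seed=0):
        """Standard Metropolis sampling of the 2D Ising model; returns
        energy samples collected after equilibration."""
        rng = np.random.default_rng(seed)
        s = rng.choice([-1, 1], size=(L, L))
        def energy(s):
            return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))
        E, samples = energy(s), []
        for sweep in range(sweeps):
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                nn = s[(i+1)%L, j] + s[(i-1)%L, j] + s[i, (j+1)%L] + s[i, (j-1)%L]
                dE = 2 * s[i, j] * nn
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] *= -1
                    E += dE
            if sweep > sweeps // 2:
                samples.append(E)
        return np.array(samples)

    # free-energy difference between two nearby temperatures by reweighting:
    # beta2*F2 - beta1*F1 = -ln < exp(-(beta2 - beta1) * E) >_beta1
    b1, b2 = 0.40, 0.41
    E = metropolis_ising(beta=b1)
    dBF = -np.log(np.mean(np.exp(-(b2 - b1) * E)))
    ```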

  17. Solving Maximal Clique Problem through Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Rajawat, Shalini; Hemrajani, Naveen; Menghani, Ekta

    2010-11-01

    The genetic algorithm is one of the most interesting heuristic search techniques. It depends basically on three operations: selection, crossover, and mutation. The outcome of the three operations is a new population for the next generation; these operations are repeated until the termination condition is reached. All the operations in the algorithm are accessible with today's molecular biotechnology. The simulations show that with this new computing algorithm, it is possible to obtain a solution from a very small initial data pool, avoiding enumeration of all candidate solutions. For randomly generated problems, the genetic algorithm can give a correct solution within a few cycles with high probability.

  18. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  19. Malleable Fuzzy Local Median C Means Algorithm for Effective Biomedical Image Segmentation

    NASA Astrophysics Data System (ADS)

    Rajendran, Arunkumar; Balakrishnan, Nagaraj; Varatharaj, Mithya

    2016-12-01

    Traditional clustering plays an effective role in the field of segmentation and has recently been developed to extract contextual information with ease. This paper presents a modified Fuzzy C-Means (FCM) algorithm that provides better segmentation in the contour grayscale regions of biomedical images where effective clustering is needed. The proposed algorithm, Malleable Fuzzy Local Median C-Means (M-FLMCM), is designed to overcome the disadvantages of the traditional FCM method: its long convergence time, its inability to remove noise, and its inability to cluster contour regions. M-FLMCM shows promising results in experiments with real-world biomedical images, reaching 96% accuracy compared with the other algorithms.

  20. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.

  1. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised; the random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining population diversity, while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base from which to create immigrants into the population by mutation. In this way, diversity is not only maintained but maintained more efficiently, adapting genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. A sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
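
    The immigrant step common to both schemes can be sketched as below: mutated copies of the elite (or of a stored memory individual) replace the worst members each generation. The ratio and mutation rate are illustrative.

    ```python
    import random

    def elitism_immigrants_step(pop, fitness, ratio=0.2, pm=0.1, rng=random):
        """One generation's immigrant step: mutate copies of the current
        elite and replace the worst individuals with them (the elitism-based
        scheme; the memory-based variant mutates a stored memory individual
        instead of the previous generation's elite)."""
        pop = sorted(pop, key=fitness, reverse=True)
        elite = pop[0]
        n_imm = max(1, int(ratio * len(pop)))
        immigrants = []
        for _ in range(n_imm):
            child = [g ^ 1 if rng.random() < pm else g for g in elite]
            immigrants.append(child)
        return pop[:-n_imm] + immigrants   # worst individuals replaced

    # toy dynamic OneMax: fitness measures agreement with a shifting target
    target = [1] * 20
    fit = lambda x: sum(1 for a, b in zip(x, target) if a == b)
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    pop = elitism_immigrants_step(pop, fit)
    ```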

  2. A Fast Color Image Encryption Algorithm Using 4-Pixel Feistel Structure

    PubMed Central

    Yao, Wang; Wu, Faguo; Zhang, Xiao; Zheng, Zhiming; Wang, Zhao; Wang, Wenhua; Qiu, Wangjie

    2016-01-01

    Algorithms using the 4-pixel Feistel structure and chaotic systems have been shown to resolve security problems caused by large data capacity and high correlation among pixels in color image encryption. In this paper, a fast color image encryption algorithm based on a modified 4-pixel Feistel structure and multiple chaotic maps is proposed to improve the efficiency of this type of algorithm. Two methods are used. First, a simple round function based on a piecewise linear function and a tent map is used to reduce the computational cost of each iteration. Second, the 4-pixel Feistel structure reduces the number of rounds by changing the twist direction securely, helping the algorithm proceed efficiently. While a large number of simulation experiments prove its security performance, additional analysis and a corresponding speed simulation show that these two methods increase the speed of the proposed algorithm (0.15 s for a 256*256 color image) to twice that of an algorithm with a similar structure (0.37 s for the same size image). Additionally, the method is also faster than other recently proposed algorithms. PMID:27824894

  3. A Fast Color Image Encryption Algorithm Using 4-Pixel Feistel Structure.

    PubMed

    Yao, Wang; Wu, Faguo; Zhang, Xiao; Zheng, Zhiming; Wang, Zhao; Wang, Wenhua; Qiu, Wangjie

    2016-01-01

    Algorithms using the 4-pixel Feistel structure and chaotic systems have been shown to resolve security problems caused by large data capacity and high correlation among pixels in color image encryption. In this paper, a fast color image encryption algorithm based on a modified 4-pixel Feistel structure and multiple chaotic maps is proposed to improve the efficiency of this type of algorithm. Two methods are used. First, a simple round function based on a piecewise linear function and a tent map is used to reduce the computational cost of each iteration. Second, the 4-pixel Feistel structure reduces the number of rounds by changing the twist direction securely, helping the algorithm proceed efficiently. While a large number of simulation experiments prove its security performance, additional analysis and a corresponding speed simulation show that these two methods increase the speed of the proposed algorithm (0.15 s for a 256*256 color image) to twice that of an algorithm with a similar structure (0.37 s for the same size image). Additionally, the method is also faster than other recently proposed algorithms.

  4. Discrete artificial bee colony algorithm for lot-streaming flowshop with total flowtime minimization

    NASA Astrophysics Data System (ADS)

    Sang, Hongyan; Gao, Liang; Pan, Quanke

    2012-09-01

    Unlike a traditional flowshop problem, where a job is assumed to be indivisible, the lot-streaming flowshop problem allows a job to overlap its operations between successive machines by splitting it into a number of smaller sub-lots and moving the completed portion of each sub-lot to the downstream machine. In this way, production is accelerated. This paper presents a discrete artificial bee colony (DABC) algorithm for the lot-streaming flowshop scheduling problem with a total flowtime criterion. Unlike the basic ABC algorithm, the proposed DABC algorithm represents a solution as a discrete job permutation. An efficient initialization scheme based on the extended Nawaz-Enscore-Ham heuristic is utilized to produce an initial population with a certain level of quality and diversity. Employed and onlooker bees generate new solutions in their neighborhoods, whereas scout bees generate new solutions by applying insert and swap operators to the best solution found so far. Moreover, a simple but effective local search is embedded in the algorithm to enhance local exploitation capability. A comparative experiment is carried out against existing discrete particle swarm optimization, hybrid genetic algorithm, threshold accepting, simulated annealing, and ant colony optimization algorithms on a total of 160 randomly generated instances. The experimental results show that the proposed DABC algorithm is quite effective for the lot-streaming flowshop with the total flowtime criterion in terms of search quality, robustness, and efficiency. This research provides a reference for optimization research on lot-streaming flowshops.

  5. A Modified MinMax k-Means Algorithm Based on PSO.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intracluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameters lead to different clustering errors, it is crucial to choose appropriate parameters. The original algorithm provides a practical framework that extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that once the maximum exponent parameter has been set, the program can reach the lowest intracluster errors. However, our experiments show that this is not always the case. In this paper, we modify the MinMax k-means algorithm with PSO to determine the parameter values that enable the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets under different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.

  6. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation

    PubMed Central

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which offers exponential acceleration for heuristic algorithms; it uses quantum bits to encode the artificial fish and updates them through the quantum revolving gate, preying behavior, following behavior, and variation of the quantum artificial fish in the search for the optimal value. We then apply the proposed algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) to simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, applying IQAFSA to distributed network problems, the simulation results for a 33-bus radial distribution network system show that IQAFSA obtains the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA. PMID:26447713

  7. Protein threading with profiles and distance constraints using clique based algorithms.

    PubMed

    Dukka, Bahadur K C; Tomita, Etsuji; Suzuki, Jun'ichi; Horimoto, Katsuhisa; Akutsu, Tatsuya

    2006-02-01

    With the advent of experimental technologies like chemical cross-linking, it has become possible to obtain distances between specific residues of a newly sequenced protein, and these experiments are usually less time-consuming than X-ray crystallography or NMR. Consequently, it is highly desirable to develop a method that incorporates this distance information to improve the performance of protein threading methods. However, protein threading with profiles under constraints on distances between residues is known to be NP-hard. Using a maximum edge-weight clique finding algorithm, we introduce a more efficient method called FTHREAD for profile threading with distance constraints that is 18 times faster than its predecessor CLIQUETHREAD. Moreover, we also present a novel practical algorithm NTHREAD for profile threading with non-strict constraints. The overall performance of FTHREAD on a data set shows that, although our algorithm uses a simple threading function, it performs as well as some of the existing methods. In particular, when some constraints are unsatisfied, NTHREAD (non-strict constraints threading) performs better than FTHREAD (strict constraints threading). We have also analyzed the effect of using different numbers of distance constraints. This algorithm helps enhance the alignment quality between the query sequence and the template structure once the corresponding template structure has been determined for the target sequence.

  8. A Modified MinMax k-Means Algorithm Based on PSO

    PubMed Central

    2016-01-01

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intracluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameters lead to different clustering errors, it is crucial to choose appropriate parameters. The original algorithm provides a practical framework that extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that once the maximum exponent parameter has been set, the program can reach the lowest intracluster errors. However, our experiments show that this is not always the case. In this paper, we modify the MinMax k-means algorithm with PSO to determine the parameter values that enable the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets under different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically. PMID:27656201

  9. Solving the depth of the repeated texture areas based on the clustering algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, Zhang; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Reconstructing a 3D scene in monocular stereo vision requires the depth of the scene points in the image. Matching errors inevitably occur in image matching, however, especially when the images contain many repeated texture areas, which produce large numbers of false matches. At present, the multiple-baseline stereo imaging algorithm is commonly used to eliminate matching errors in repeated texture areas. It can eliminate the ambiguity arising from common repeated textures, but it places restrictions on the baseline and is slow. In this paper, we put forward an algorithm for calculating the depth of matching points in repeated texture areas based on a clustering algorithm. First, we preprocess the images with a Gauss filter. Second, we segment the repeated texture regions into image blocks using a spectral clustering segmentation algorithm based on superpixels and tag the blocks. Then we match the two images and solve for the depth of the image. Finally, the depth of each image block is taken as the median of all depth values of the calculated points in the block, yielding the depth of the repeated texture areas. Extensive image experiments show that our algorithm calculates the depth of repeated texture areas very well.

  10. Learning with the ratchet algorithm.

    SciTech Connect

    Hush, D. R.; Scovel, James C.

    2003-01-01

    This paper presents a randomized algorithm called Ratchet that asymptotically minimizes (with probability 1) functions that satisfy a positive-linear-dependent (PLD) property. We establish the PLD property and a corresponding realization of Ratchet for a generalized loss criterion for both linear machines and linear classifiers. We describe several learning criteria that can be obtained as special cases of this generalized loss criterion, e.g. classification error, classification loss and weighted classification error. We also establish the PLD property and a corresponding realization of Ratchet for the Neyman-Pearson criterion for linear classifiers. Finally we show how, for linear classifiers, the Ratchet algorithm can be derived as a modification of the Pocket algorithm.

  11. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range. The second is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm based on neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  12. Parallel job-scheduling algorithms

    SciTech Connect

    Rodger, S.H.

    1989-01-01

    In this thesis, we consider solving job scheduling problems on the CREW PRAM model. We show how to adapt Cole's pipeline merge technique to yield several efficient parallel algorithms for a number of job scheduling problems and one optimal parallel algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and processing times, find a schedule that minimizes the maximum lateness of the jobs and allows preemption when the jobs are scheduled to run on one machine. In addition, we present the first NC algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and unit processing times, determine if there is a schedule of jobs on one machine, and calculate the schedule if it exists. We identify the notion of a canonical schedule, which is the type of schedule our algorithm computes if there is a schedule. Our algorithm runs in O((log n)^2) time and uses O(n^2 k^2) processors, where k is the minimum number of distinct offsets of release times or deadlines.

  13. CUDT: A CUDA Based Decision Tree Algorithm

    PubMed Central

    Sheu, Ruey-Kai; Chiu, Chun-Chieh

    2014-01-01

    The decision tree is one of the famous classification methods in data mining, and much research has been devoted to improving its performance. However, those algorithms were developed to run on traditional distributed systems, where latency cannot be improved when processing the huge data generated by ubiquitous sensing nodes without the help of new technology. In order to improve data processing latency in huge data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with the traditional CPU version. The results show that CUDT is 5∼55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets. PMID:25140346

  14. UWB Tracking System Design with TDOA Algorithm

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan

    2006-01-01

    This presentation discusses an ultra-wideband (UWB) tracking system design effort using the TDOA (Time Difference of Arrival) tracking algorithm. UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least-squares method is chosen to solve the nonlinear TDOA equations. Matlab simulations in both two-dimensional and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project, Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center to aid in surveillance around the International Space Station (ISS).
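
    The paper uses a two-stage weighted least-squares solver; the sketch below substitutes a simpler Gauss-Newton iteration on the same TDOA residuals to show the shape of the problem. Anchor positions and the noise-free measurements are illustrative.

    ```python
    import numpy as np

    def tdoa_solve(anchors, tdoas, c=2.998e8, x0=None, iters=20):
        """Gauss-Newton solution of the TDOA equations (a simpler stand-in
        for the paper's two-stage weighted least-squares method). anchors[0]
        is the reference receiver; tdoas[i] is t_i - t_0 for anchors[i+1]."""
        x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
        d = c * np.asarray(tdoas)
        for _ in range(iters):
            r0 = np.linalg.norm(x - anchors[0])
            ri = np.linalg.norm(x - anchors[1:], axis=1)
            res = (ri - r0) - d                       # range-difference residuals
            J = (x - anchors[1:]) / ri[:, None] - (x - anchors[0]) / r0
            step, *_ = np.linalg.lstsq(J, -res, rcond=None)
            x = x + step
            if np.linalg.norm(step) < 1e-9:
                break
        return x

    anchors = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
    true = np.array([3., 4., 5.])
    tdoa = (np.linalg.norm(true - anchors[1:], axis=1)
            - np.linalg.norm(true - anchors[0])) / 2.998e8
    print(tdoa_solve(anchors, tdoa))
    ```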

  15. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    PubMed

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal, and regularization of the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of the two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem in which each model point has a counterpart in the scene set. By eliminating the transformation variables, we show that the original matching problem reduces to a concave quadratic assignment problem whose objective function has a low-rank Hessian matrix. This facilitates the use of large-scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision in which, in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower-bounding scheme that has a linear assignment formulation and can be solved efficiently. Extensive experiments on synthetic and real datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.

  16. Multi-jagged: A scalable parallel spatial partitioning algorithm

    DOE PAGES

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; ...

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.

  17. Multi-jagged: A scalable parallel spatial partitioning algorithm

    SciTech Connect

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.

  18. DC Algorithm for Extended Robust Support Vector Machine.

    PubMed

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-03-23

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.

  19. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the number of states is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states corresponding to all legal paths through the decoding lattice, with phase a function of the probability of the path being taken given the received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition in which the most probable path has a high probability of being measured.

  20. An incremental high-utility mining algorithm with transaction insertion.

    PubMed

    Lin, Jerry Chun-Wei; Gan, Wensheng; Hong, Tzung-Pei; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since a customer's purchased items may carry various factors, such as profit or quantity. High-utility mining was designed to overcome this limitation by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle a static database; few handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinational explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.

  1. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. It aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm uses QPSO for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy compared with the DNA computing algorithm.
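
    As background for point (2), a minimal sketch of the QPSO update itself (the paper wraps this around the DNA computing parameters; the sphere test function below is only a stand-in):

        import numpy as np

        def qpso(f, dim, n=20, iters=200, beta=0.75, lo=-5.0, hi=5.0):
            # Quantum-behaved PSO: no velocities; each particle is re-sampled
            # around an attractor with a spread set by the mean best (mbest).
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n, dim))
            pbest, pcost = x.copy(), np.array([f(p) for p in x])
            for _ in range(iters):
                gbest = pbest[np.argmin(pcost)]
                mbest = pbest.mean(axis=0)
                phi = rng.random((n, dim))
                attractor = phi * pbest + (1 - phi) * gbest
                u = 1.0 - rng.random((n, dim))          # in (0, 1]
                sign = rng.choice([-1.0, 1.0], (n, dim))
                x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
                cost = np.array([f(p) for p in x])
                better = cost < pcost
                pbest[better], pcost[better] = x[better], cost[better]
            return pbest[np.argmin(pcost)], pcost.min()

        best, val = qpso(lambda v: float(np.sum(v**2)), dim=5)   # sphere function
        print(val)    # near 0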

  2. Simulated annealing algorithm applied in adaptive near field beam shaping

    NASA Astrophysics Data System (ADS)

    Yu, Zhan; Ma, Hao-tong; Du, Shao-jun

    2010-11-01

    Laser beam shaping is required in many applications to improve the efficiency of laser systems. In this paper, near-field beam shaping based on the combination of a simulated annealing algorithm and Zernike polynomials is demonstrated. Since the phase distribution can be represented by an expansion in Zernike polynomials, the problem of searching for an appropriate phase distribution becomes one of optimizing a vector of Zernike coefficients. The feasibility of this method is validated theoretically by translating a Gaussian beam into a square quasi-flattop beam in the near field. Finally, a closed control loop consisting of a phase-only liquid crystal spatial light modulator and the simulated annealing algorithm is used to prove the validity of the technique. The experimental results show that the system can generate laser beams with the desired intensity distributions.
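
    A minimal sketch of the optimization loop described here, with a stand-in cost function in place of the actual intensity error computed from the Zernike-expanded phase:

        import numpy as np

        def simulated_annealing(cost, x0, t0=1.0, cooling=0.995, iters=5000, step=0.1):
            rng = np.random.default_rng(1)
            x, e, t = x0.copy(), cost(x0), t0
            for _ in range(iters):
                cand = x + rng.normal(0.0, step, x.shape)  # perturb coefficients
                ec = cost(cand)
                # Always accept improvements; accept uphill moves with
                # Boltzmann probability so the search can escape local minima.
                if ec < e or rng.random() < np.exp((e - ec) / t):
                    x, e = cand, ec
                t *= cooling                               # geometric cooling
            return x, e

        # Stand-in cost; the real one would expand the Zernike coefficients
        # into a phase map, propagate the beam, and score the flattop error.
        target = np.ones(16)
        coeffs, err = simulated_annealing(lambda c: float(np.sum((c**2 - target)**2)),
                                          x0=np.zeros(16))
        print(err)    # much smaller than the initial cost of 16.0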

  3. A novel dynamical community detection algorithm based on weighting scheme

    NASA Astrophysics Data System (ADS)

    Li, Ju; Yu, Kai; Hu, Ke

    2015-12-01

    Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stop time of the iteration, we utilize a new stability measure defined as the auto-covariance of a Markov random walk. The algorithm does not need the number of communities to be specified in advance, and it naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.

  4. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First, the iris image is processed with Canny edge detection to detect the pupil edge, and the center and radius of the pupil are found with the Hough circle transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a normalized iris of fixed dimensions by transforming Cartesian coordinates into polar format, and a thresholding technique removes the eyelid and eyelashes. The experiments were conducted on grayscale eye images from the iris database of the Chinese Academy of Sciences Institute of Automation (CASIA), a reliable dataset widely used in iris biometrics studies. The results show that a threshold level of 0.3 yields better accuracy than other values, so the present algorithm can be used for segmentation and normalization of the zigzag collarette with an accuracy of 98.88%.
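
    A minimal sketch of the rubber-sheet normalization step, assuming concentric circular pupil and iris boundaries (the real algorithm obtains these from Canny edges plus the Hough circle transform, and works on CASIA images rather than random data):

        import numpy as np

        def rubber_sheet(image, pupil_xy, r_pupil, r_iris, radial=64, angular=256):
            # Daugman rubber-sheet model: map the annulus between the pupil and
            # iris boundaries onto a fixed radial x angular rectangle.
            cx, cy = pupil_xy
            theta = np.linspace(0, 2 * np.pi, angular, endpoint=False)
            r = np.linspace(0, 1, radial)
            # Radius interpolated between the two boundaries for every (r, theta).
            rad = r_pupil + np.outer(r, np.ones_like(theta)) * (r_iris - r_pupil)
            xs = (cx + rad * np.cos(theta)).astype(int).clip(0, image.shape[1] - 1)
            ys = (cy + rad * np.sin(theta)).astype(int).clip(0, image.shape[0] - 1)
            return image[ys, xs]                     # fixed-size normalized iris

        eye = np.random.randint(0, 256, (280, 320))  # stand-in for a CASIA image
        print(rubber_sheet(eye, (160, 140), 30, 90).shape)   # (64, 256)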

  5. Algorithm for precision subsample timing between Gaussian-like pulses.

    PubMed

    Lerche, R A; Golick, B P; Holder, J P; Kalantar, D H

    2010-10-01

    Moderately priced oscilloscopes available for the NIF power sensors and target diagnostics have 6 GHz bandwidths at 20-25 Gsamples/s (40 ps sample spacing). Some NIF experiments require cross timing between instruments be determined with accuracy better than 30 ps. A simple analysis algorithm for Gaussian-like pulses such as the 100-ps-wide NIF timing fiducial can achieve single-event cross-timing precision of 1 ps (1/50 of the sample spacing). The midpoint-timing algorithm is presented along with simulations that show why the technique produces good timing results. Optimum pulse width is found to be ∼2.5 times the sample spacing. Experimental measurements demonstrate use of the technique and highlight the conditions needed to obtain optimum timing performance.
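
    The paper's exact midpoint definition is not spelled out in the abstract; one reading consistent with it (assumed here) is to average the linearly interpolated half-maximum crossing times of the rising and falling edges:

        import numpy as np

        def half_max_crossing(t, y, rising=True):
            # Linearly interpolate the time at which y crosses half its peak.
            half = y.max() / 2.0
            above = y >= half
            i = np.argmax(above) if rising else len(y) - np.argmax(above[::-1]) - 1
            i0, i1 = (i - 1, i) if rising else (i, i + 1)
            frac = (half - y[i0]) / (y[i1] - y[i0])
            return t[i0] + frac * (t[i1] - t[i0])

        def midpoint_time(t, y):
            return 0.5 * (half_max_crossing(t, y, True) +
                          half_max_crossing(t, y, False))

        dt = 0.040                                   # 40 ps sample spacing, in ns
        t = np.arange(0.0, 1.0, dt)
        pulse = np.exp(-((t - 0.4731) / 0.060)**2)   # ~100-ps FWHM Gaussian fiducial
        print(midpoint_time(t, pulse))               # recovers ~0.4731 ns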

  6. A hierarchical algorithm for molecular similarity (H-FORMS).

    PubMed

    Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel

    2015-07-15

    A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect if a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel local structural rotation-invariant descriptors for the atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, the atom matching based on local similarity indexes decreases the number of testing trials and significantly reduces the dimensionality of the Hungarian assignment problem. The experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of the required computational time and accuracy.
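
    The Hungarian assignment step whose dimensionality the method reduces can be sketched with SciPy; here the matching is done on raw interatomic distances of already-aligned conformers, and the names are illustrative:

        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.spatial.distance import cdist

        def match_atoms(coords_a, coords_b):
            # Optimal 1-1 atom correspondence for two aligned conformers.
            cost = cdist(coords_a, coords_b)          # pairwise distance matrix
            rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
            return cols, cost[rows, cols].sum()

        a = np.random.rand(8, 3)
        perm = np.random.permutation(8)
        b = a[perm]                                   # same structure, relabeled atoms
        match, total = match_atoms(a, b)
        print(np.array_equal(match, np.argsort(perm)), round(total, 6))  # True 0.0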

  7. Innovations in Lattice QCD Algorithms

    SciTech Connect

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high-precision results needed to better understand the nature of strong interactions, assist experiment in discovering new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I review these algorithms and their impact on the nature of lattice QCD calculations performed today.

  8. Two-stage hybrid feature selection algorithms for diagnosing erythemato-squamous diseases.

    PubMed

    Xie, Juanying; Lei, Jinhu; Xie, Weixin; Shi, Yong; Liu, Xiaohui

    2013-01-01

    This paper proposes two-stage hybrid feature selection algorithms to build stable and efficient diagnostic models, with a new accuracy measure introduced to assess the models. The two-stage hybrid algorithms adopt Support Vector Machines (SVM) as the classification tool; the extended Sequential Forward Search (SFS), Sequential Forward Floating Search (SFFS), and Sequential Backward Floating Search (SBFS), respectively, as search strategies; and the generalized F-score (GF) to evaluate the importance of each feature. The new accuracy measure is used as the criterion to evaluate the performance of a temporary SVM and direct the feature selection algorithms. These hybrid methods combine the advantages of filters and wrappers to select an optimal feature subset from the original feature set and build stable and efficient classifiers. To obtain stable and optimal classifiers, we conduct 10-fold cross-validation experiments in the first stage; we then merge the 10 feature subsets selected across the folds into a new full feature set on which second-stage feature selection is performed for each algorithm, repeating each hybrid feature selection algorithm on the fold that achieved the best first-stage result. Experimental results show that our proposed two-stage hybrid feature selection algorithms construct efficient diagnostic models with better accuracy than those built by the corresponding hybrid feature selection algorithms without the second-stage feature selection procedure. Furthermore, our methods achieve better classification accuracy than the available algorithms for diagnosing erythemato-squamous diseases.
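
    A bare-bones wrapper-style sequential forward search with an SVM, using plain cross-validated accuracy and a standard scikit-learn dataset as stand-ins for the paper's GF filter, new accuracy measure, and dermatology data:

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)     # stand-in dataset
        selected, remaining, best = [], list(range(X.shape[1])), 0.0
        while remaining and len(selected) < 10:
            # Greedily add the feature whose inclusion helps CV accuracy most.
            score, j = max((cross_val_score(SVC(), X[:, selected + [j]], y,
                                            cv=5).mean(), j) for j in remaining)
            if score <= best:                          # stop when nothing helps
                break
            best = score
            selected.append(j)
            remaining.remove(j)
        print(selected, round(best, 4))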

  9. Improved K-means clustering algorithm for exploring local protein sequence motifs representing common structural property.

    PubMed

    Zhong, Wei; Altun, Gulsah; Harrison, Robert; Tai, Phang C; Pan, Yi

    2005-09-01

    Information about local protein sequence motifs is very important to the analysis of biologically significant conserved regions of protein sequences. These conserved regions can potentially determine the diverse conformation and activities of proteins. In this work, recurring sequence motifs of proteins are explored with an improved K-means clustering algorithm on a new dataset. The structural similarity of these recurring sequence clusters to produce sequence motifs is studied in order to evaluate the relationship between sequence motifs and their structures. To the best of our knowledge, the dataset used by our research is the most updated dataset among similar studies for sequence motifs. A new greedy initialization method for the K-means algorithm is proposed to improve traditional K-means clustering techniques. The new initialization method tries to choose suitable initial points, which are well separated and have the potential to form high-quality clusters. Our experiments indicate that the improved K-means algorithm satisfactorily increases the percentage of sequence segments belonging to clusters with high structural similarity. Careful comparison of sequence motifs obtained by the improved and traditional algorithms also suggests that the improved K-means clustering algorithm may discover some relatively weak and subtle sequence motifs, which are undetectable by the traditional K-means algorithms. Many biochemical tests reported in the literature show that these sequence motifs are biologically meaningful. Experimental results also indicate that the improved K-means algorithm generates more detailed sequence motifs representing common structures than previous research. Furthermore, these motifs are universally conserved sequence patterns across protein families, overcoming some weak points of other popular sequence motifs. The satisfactory result of the experiment suggests that this new K-means algorithm may be applied to other areas of bioinformatics.
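
    One simple way to realize "well separated" initial points is the farthest-point heuristic sketched below; the paper's initializer additionally screens for points likely to form high-quality clusters, which is omitted here:

        import numpy as np

        def greedy_init(X, k, rng=np.random.default_rng(2)):
            # Start from a random point, then repeatedly take the point farthest
            # from all seeds chosen so far, yielding well-separated seeds.
            centers = [X[rng.integers(len(X))]]
            for _ in range(k - 1):
                d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
                centers.append(X[np.argmax(d)])
            return np.array(centers)

        X = np.vstack([np.random.randn(100, 2) + off
                       for off in ([0, 0], [8, 0], [0, 8])])
        print(greedy_init(X, 3))    # typically one seed per well-separated blob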

  10. GOES-West Shows U.S. West's Record Rainfall

    NASA Video Gallery

    A new time-lapse animation of data from NOAA's GOES-West satellite provides a good picture of why the U.S. West Coast continues to experience record rainfall. The new animation shows the movement o...

  11. Ouroboros: A Tool for Building Generic, Hybrid, Divide& Conquer Algorithms

    SciTech Connect

    Johnson, J R; Foster, I

    2003-05-01

    A hybrid divide and conquer algorithm is one that switches from a divide and conquer to an iterative strategy at a specified problem size. Such algorithms can provide significant performance improvements relative to alternatives that use a single strategy. However, the identification of the optimal problem size at which to switch for a particular algorithm and platform can be challenging. We describe an automated approach to this problem that first conducts experiments to explore the performance space on a particular platform and then uses the resulting performance data to construct an optimal hybrid algorithm on that platform. We implement this technique in a tool, ''Ouroboros'', that automatically constructs a high-performance hybrid algorithm from a set of registered algorithms. We present results obtained with this tool for several classical divide and conquer algorithms, including matrix multiply and sorting, and report speedups of up to six times achieved over non-hybrid algorithms.
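
    The idea can be illustrated with a sorting example: a mergesort that switches to insertion sort below a cutoff, with the cutoff chosen by timing experiments much as Ouroboros automates (a sketch only, not the tool's interface):

        import random, time

        def insertion_sort(a, lo, hi):
            # Iterative strategy: fast on tiny slices due to low overhead.
            for i in range(lo + 1, hi):
                v, j = a[i], i - 1
                while j >= lo and a[j] > v:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = v

        def hybrid_mergesort(a, lo=0, hi=None, cutoff=32):
            # Divide and conquer until the subproblem reaches cutoff, then switch.
            if hi is None:
                hi = len(a)
            if hi - lo <= cutoff:
                insertion_sort(a, lo, hi)
                return
            mid = (lo + hi) // 2
            hybrid_mergesort(a, lo, mid, cutoff)
            hybrid_mergesort(a, mid, hi, cutoff)
            merged, i, j = [], lo, mid
            while i < mid and j < hi:
                if a[i] <= a[j]:
                    merged.append(a[i]); i += 1
                else:
                    merged.append(a[j]); j += 1
            merged += a[i:mid] + a[j:hi]
            a[lo:hi] = merged

        # Empirical search for the switch point, in the spirit of Ouroboros.
        data = [random.random() for _ in range(100_000)]
        for cutoff in (1, 8, 32, 128):
            a = data[:]
            t0 = time.perf_counter()
            hybrid_mergesort(a, cutoff=cutoff)
            print(cutoff, round(time.perf_counter() - t0, 3))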

  12. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  13. Quantum-inspired immune clonal algorithm for global optimization.

    PubMed

    Jiao, Licheng; Li, Yangyang; Gong, Maoguo; Zhang, Xiangrong

    2008-10-01

    Based on the concepts and principles of quantum computing, a novel immune clonal algorithm, called a quantum-inspired immune clonal algorithm (QICA), is proposed to deal with the problem of global optimization. In QICA, the antibody is proliferated and divided into a set of subpopulation groups. The antibodies in a subpopulation group are represented by multistate gene quantum bits. In the antibody's updating, the general quantum rotation gate strategy and the dynamic adjusting angle mechanism are applied to accelerate convergence. The quantum not gate is used to realize quantum mutation to avoid premature convergences. The proposed quantum recombination realizes the information communication between subpopulation groups to improve the search efficiency. Theoretical analysis proves that QICA converges to the global optimum. In the first part of the experiments, 10 unconstrained and 13 constrained benchmark functions are used to test the performance of QICA. The results show that QICA performs much better than the other improved genetic algorithms in terms of the quality of solution and computational cost. In the second part of the experiments, QICA is applied to a practical problem (i.e., multiuser detection in direct-sequence code-division multiple-access systems) with a satisfying result.

  14. Algorithmic methods in diffraction microscopy

    NASA Astrophysics Data System (ADS)

    Thibault, Pierre

    Recent diffraction imaging techniques use properties of coherent sources (most notably x-rays and electrons) to transfer a portion of the imaging task to computer algorithms. "Diffraction microscopy" is a method which consists in reconstructing the image of a specimen from its diffraction pattern. Because only the amplitude of a wavefield incident on a detector is measured, reconstruction of the image entails recovering the lost phases. This extension of the "phase problem" commonly met in crystallography is solved only if additional information is available. The main topic of this thesis is the development of algorithmic techniques in diffraction microscopy. In addition to introducing new methods, it is meant to be a review of the algorithmic aspects of the field of diffractive imaging. An overview of the scattering approximations used in the interpretation of diffraction datasets is first given, as well as a numerical propagation tool useful in conditions where known approximations fail. Concepts central to diffraction microscopy---such as oversampling---are then introduced and other similar imaging techniques described. A complete description of iterative reconstruction algorithms follows, with a special emphasis on the difference map, the algorithm used in this thesis. The formalism, based on constraint sets and projections onto these sets, is then defined and explained. Simple projections commonly used in diffraction imaging are then described. The various ways experimental realities can affect reconstruction methods are then enumerated. Among the diverse sources of algorithmic difficulties, one finds that noise, missing data, and partial coherence are typically the most important. Other related difficulties discussed are the detrimental effects of crystalline domains in a specimen, and the convergence problems occurring when the support of a complex-valued specimen is not well known. The last part of this thesis presents reconstruction results.

  15. An Upperbound to the Performance of Ranked-Output Searching: Optimal Weighting of Query Terms Using A Genetic Algorithm.

    ERIC Educational Resources Information Center

    Robertson, Alexander M.; Willett, Peter

    1996-01-01

    Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…

  16. Calibration of a polarization navigation sensor using the NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Hu, Xiaoping; Zhang, Lilian; He, Xiaofeng

    2016-10-01

    A bio-inspired polarization navigation sensor is designed based on the polarization sensitivity mechanisms of insects. A new calibration model is presented that formulates the calibration problem as a multi-objective optimization problem. Unlike existing calibration models, the proposed model makes the calibration problem well-posed. The calibration parameters are optimized with the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) to minimize both the angle of polarization (AOP) residuals and the degree of linear polarization (DOLP) dispersions. The results of simulations and experiments show that the proposed algorithm is more stable than the compared methods for the calibration of polarization navigation sensors.
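
    The core NSGA-II ingredient, fast non-dominated sorting, can be sketched as follows; the two objectives below stand in for the AOP residual and DOLP dispersion minimized in the paper:

        import numpy as np

        def non_dominated_sort(F):
            # Rank solutions into Pareto fronts; F[i] holds the objective values
            # of solution i, and lower is better on every objective.
            n = len(F)
            dominates = lambda p, q: np.all(F[p] <= F[q]) and np.any(F[p] < F[q])
            counts = np.zeros(n, int)              # how many solutions dominate i
            dominated = [[] for _ in range(n)]     # whom i dominates
            for p in range(n):
                for q in range(n):
                    if dominates(p, q):
                        dominated[p].append(q)
                    elif dominates(q, p):
                        counts[p] += 1
            fronts, current = [], [i for i in range(n) if counts[i] == 0]
            while current:
                fronts.append(current)
                nxt = []
                for p in current:
                    for q in dominated[p]:
                        counts[q] -= 1
                        if counts[q] == 0:
                            nxt.append(q)
                current = nxt
            return fronts

        F = np.array([[1, 5], [2, 2], [5, 1], [3, 3], [4, 4]])
        print(non_dominated_sort(F))   # [[0, 1, 2], [3], [4]]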

  17. Parallel algorithm for computation of second-order sequential best rotations

    NASA Astrophysics Data System (ADS)

    Redif, Soydan; Kasap, Server

    2013-12-01

    Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.

  18. Genetic Algorithm and Tabu Search for Vehicle Routing Problems with Stochastic Demand

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Irhamah

    2010-11-01

    This paper presents the problem of designing solid waste collection routes, involving the scheduling of vehicles where each vehicle begins at the depot, visits customers, and ends at the depot. It is modeled as a Vehicle Routing Problem with Stochastic Demands (VRPSD). A data set from a real-world case is used in this research: the problem data are drawn from a real instance of the VRPSD in waste collection. We developed Genetic Algorithm (GA) and Tabu Search (TS) procedures, and these produced the best possible results. Results from the experiments show the advantages of the proposed algorithms, namely their robustness and better solution quality.

  19. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera locations during our tests, and the resulting geometrical distortion and interference from trees, this result can be considered passable. The correlation between source data, such as license plate dimensions and texture or camera location, and the parameters of the algorithm was also determined.

  20. Distributed Minimum Hop Algorithms

    DTIC Science & Technology

    1982-01-01

    Upon acknowledgement, node d starts iteration i+1; otherwise the algorithm terminates. A detailed description of the algorithm is given in pidgin Algol. The precise behavior of the algorithm under these circumstances is described by the pidgin Algol program in the appendix, which is executed by each node.

  1. The infrared moving object detection and security detection related algorithms based on W4 and frame difference

    NASA Astrophysics Data System (ADS)

    Yin, Jiale; Liu, Lei; Li, He; Liu, Qiankun

    2016-07-01

    This paper presents infrared moving object detection and security detection related algorithms for video surveillance, based on the classical W4 algorithm and frame difference. The classical W4 algorithm is one of the powerful background subtraction algorithms for infrared images and can detect moving objects accurately, completely, and quickly. However, it can only cope with slight movement of the background: since the background model is unchanged once established, the error grows over time in a long-term surveillance system. In this paper, we present a detection algorithm based on the classical W4 algorithm and frame difference. It not only overcomes the false detections caused by sudden state changes in the background, but also eliminates the holes caused by frame differencing. Based on these, we further design various security detection related algorithms, such as illegal intrusion alarm, illegal persistence alarm, and illegal displacement alarm. We compare our method with the classical W4 algorithm, frame difference, and other state-of-the-art methods. The experiments detailed in this paper show that the proposed method outperforms the classical W4 algorithm and frame difference and serves the security detection related algorithms well.
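
    A minimal sketch of the fusion idea: frame differencing responds only to inter-frame change, while the background model fills the holes it leaves inside slowly moving objects (thresholds and frames here are stand-ins):

        import numpy as np

        def detect_motion(prev, curr, background, t_diff=25, t_bg=40):
            # Union of the two cues: frame difference catches fresh motion,
            # background subtraction fills holes inside slow-moving objects.
            fd = np.abs(curr.astype(int) - prev.astype(int)) > t_diff
            bg = np.abs(curr.astype(int) - background.astype(int)) > t_bg
            return fd | bg

        # Stand-in 8-bit infrared frames.
        prev = np.zeros((120, 160), np.uint8)
        curr = prev.copy()
        curr[40:60, 50:80] = 200                          # a warm moving object
        bg = prev.copy()
        print(detect_motion(prev, curr, bg).sum())        # 600 pixels flagged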

  2. Basis for spectral curvature algorithms in remote sensing of chlorophyll

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.; Esaias, W. E.

    1983-01-01

    A simple, empirically derived algorithm for estimating oceanic chlorophyll concentrations from spectral radiances measured by a low-flying spectroradiometer has proved highly successful in field experiments in 1980-82. The sensor used was the Multichannel Ocean Color Sensor, and the originator of the algorithm was Grew (1981). This paper presents an explanation for the algorithm based on the optical properties of waters containing chlorophyll and other phytoplankton pigments and the radiative transfer equations governing the remotely sensed signal. The effects of varying solar zenith, atmospheric transmittance, and interfering substances in the water on the chlorophyll algorithm are characterized, and applicability of the algorithm is discussed.

  3. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
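
    A minimal real-valued negative selection sketch: detectors are generated at random and kept only if they fail to match any "self" (normal behavior) sample; the parameters and data below are illustrative, not the NASA experiment's:

        import numpy as np

        def train_detectors(self_data, n_detectors=500, radius=0.1,
                            rng=np.random.default_rng(3)):
            # Keep only candidates farther than `radius` from every self sample.
            detectors = []
            while len(detectors) < n_detectors:
                d = rng.random(self_data.shape[1])
                if np.min(np.linalg.norm(self_data - d, axis=1)) > radius:
                    detectors.append(d)
            return np.array(detectors)

        def is_anomalous(x, detectors, radius=0.1):
            # An input matching any detector lies in non-self space: a fault.
            return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= radius)

        normal = np.random.rand(300, 2) * 0.5        # 'self': normal flight data
        det = train_detectors(normal)
        print(is_anomalous(np.array([0.2, 0.2]), det))   # typically False (self)
        print(is_anomalous(np.array([0.9, 0.9]), det))   # typically True (fault)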

  4. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Xin-ran, Li; Xin, Wang

    2017-01-01

    A huge quantity of too-short-arc (TSA) observational data has been obtained in sky surveys of space objects. However, reasonable results for the TSAs can hardly be obtained with the classical methods of initial orbit determination (IOD). In this paper, the IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new IOD method for TSAs is established through the selection of the optimized variables and the corresponding genetic operators for specific problems. Numerical experiments based on real measurements show that the method can provide valid initial values for follow-up work.

  5. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
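
    The local solve at the heart of such solvers is ordinary tridiagonal elimination; below is a sketch of the sequential (Thomas) step each processor would perform on its subsystem before the small correction that stitches the subdomains together, tested on a diagonally dominant Toeplitz system like the STT cases analyzed:

        import numpy as np

        def thomas(a, b, c, d):
            # a: sub-diagonal (a[0] unused), b: diagonal,
            # c: super-diagonal (c[-1] unused), d: right-hand side.
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                 # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = dp.copy()
            for i in range(n - 2, -1, -1):        # back substitution
                x[i] -= cp[i] * x[i + 1]
            return x

        n = 8
        a, b, c = np.full(n, -1.0), np.full(n, 4.0), np.full(n, -1.0)
        x = thomas(a, b, c, np.ones(n))
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(A @ x, np.ones(n)))     # True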

  6. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
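
    A sketch of a Kalman filter for conscan-style pointing estimation under a simple assumed model: the power variation measured at scan angle theta is proportional to the offset component along the scan direction (this linearized model is an assumption for illustration, not the article's exact formulation):

        import numpy as np

        rng = np.random.default_rng(4)
        true_offset = np.array([0.003, -0.001])    # pointing error, in degrees
        x = np.zeros(2)                            # offset estimate
        P = np.eye(2) * 1e-3                       # estimate covariance
        R = 1e-6                                   # measurement noise variance

        for k in range(200):
            theta = 2 * np.pi * k / 50             # position on the scan circle
            H = np.array([np.cos(theta), np.sin(theta)])
            y = H @ true_offset + rng.normal(0.0, np.sqrt(R))
            S = H @ P @ H + R                      # innovation variance
            K = P @ H / S                          # Kalman gain
            x = x + K * (y - H @ x)                # update continuously per sample;
            P = P - np.outer(K, H @ P)             # a missing sample is just skipped
        print(x)                                   # close to true_offset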

  7. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  8. Dolphin shows and interaction programs: benefits for conservation education?

    PubMed

    Miller, L J; Zeigler-Hill, V; Mellen, J; Koeppel, J; Greer, T; Kuczaj, S

    2013-01-01

    Dolphin shows and dolphin interaction programs are two types of education programs within zoological institutions used to educate visitors about dolphins and the marine environment. The current study examined the short- and long-term effects of these programs on visitors' conservation-related knowledge, attitude, and behavior. Participants of both dolphin shows and interaction programs demonstrated a significant short-term increase in knowledge, attitudes, and behavioral intentions. Three months following the experience, participants of both dolphin shows and interaction programs retained the knowledge learned during their experience and reported engaging in more conservation-related behaviors. Additionally, the number of dolphin shows attended in the past was a significant predictor of recent conservation-related behavior suggesting that repetition of these types of experiences may be important in inspiring people to conservation action. These results suggest that both dolphin shows and dolphin interaction programs can be an important part of a conservation education program for visitors of zoological facilities.

  9. Breast Cancer Screening in the Era of Density Notification Legislation: Summary of 2014 Massachusetts Experience and Suggestion of An Evidence-Based Management Algorithm by Multi-disciplinary Expert Panel

    PubMed Central

    Freer, Phoebe E.; Slanetz, Priscilla J.; Haas, Jennifer S.; Tung, Nadine M.; Hughes, Kevin S.; Armstrong, Katrina; Semine, A. Alan; Troyan, Susan L.; Birdwell, Robyn L.

    2015-01-01

    Purpose: Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. Methods: We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST), a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review (ICER), the Cochrane review, National Comprehensive Cancer Network (NCCN) guidelines, American Cancer Society (ACS) recommendations, and American College of Radiology (ACR) appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. Results: The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Women with dense breasts who are otherwise at low risk (<15% lifetime risk) do not routinely require supplemental screening per the expert consensus. Women at high risk (>20% lifetime) should consider supplemental screening MRI in addition to routine mammography regardless of breast density. Conclusion: We report the development of the multi-disciplinary collaborative approach to density notification. We propose a risk stratification algorithm to assess an individual woman's personal level of risk to determine the need for supplemental screening. PMID:26290416

  10. Breast cancer screening in the era of density notification legislation: summary of 2014 Massachusetts experience and suggestion of an evidence-based management algorithm by multi-disciplinary expert panel.

    PubMed

    Freer, Phoebe E; Slanetz, Priscilla J; Haas, Jennifer S; Tung, Nadine M; Hughes, Kevin S; Armstrong, Katrina; Semine, A Alan; Troyan, Susan L; Birdwell, Robyn L

    2015-09-01

    Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST), a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review, the Cochrane review, National Comprehensive Cancer Network guidelines, American Cancer Society recommendations, and American College of Radiology appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Women with dense breasts who are otherwise at low risk (<15% lifetime risk) do not routinely require supplemental screening per the expert consensus. Women at high risk (>20% lifetime) should consider supplemental screening MRI in addition to routine mammography regardless of breast density. We report the development of the multi-disciplinary collaborative approach to density notification. We propose a risk stratification algorithm to assess an individual woman's personal level of risk to determine the need for supplemental screening.

  11. Interpreting Quantifier Scope Ambiguity: Evidence of Heuristic First, Algorithmic Second Processing

    PubMed Central

    Dwivedi, Veena D.

    2013-01-01

    The present work suggests that sentence processing requires both heuristic and algorithmic processing streams, where the heuristic processing strategy precedes the algorithmic phase. This conclusion is based on three self-paced reading experiments in which the processing of two-sentence discourses was investigated, where context sentences exhibited quantifier scope ambiguity. Experiment 1 demonstrates that such sentences are processed in a shallow manner. Experiment 2 uses the same stimuli as Experiment 1 but adds questions to ensure deeper processing. Results indicate that reading times are consistent with a lexical-pragmatic interpretation of number associated with context sentences, but responses to questions are consistent with the algorithmic computation of quantifier scope. Experiment 3 shows the same pattern of results as Experiment 2, despite using stimuli with different lexical-pragmatic biases. These effects suggest that language processing can be superficial, and that deeper processing, which is sensitive to structure, only occurs if required. Implications for recent studies of quantifier scope ambiguity are discussed. PMID:24278439

  12. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
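
    A toy version of the shift-and-mask subalgorithm described above: search shift/mask pairs until the keys map to distinct values (in a real implementation the slot would also store the key, so that membership tests on non-keys cannot yield false positives):

        def synthesize_hash(keys, max_shift=32):
            # Try shift/mask combinations until every key hashes uniquely,
            # yielding a constant-time membership test for the static key set.
            for shift in range(max_shift):
                for width in range(1, 16):       # mask isolating `width` bits
                    mask = (1 << width) - 1
                    hashed = [(k >> shift) & mask for k in keys]
                    if len(set(hashed)) == len(keys):   # no collisions, ever
                        table = set(hashed)
                        return lambda k, s=shift, m=mask, t=table: ((k >> s) & m) in t
            return None

        keys = [16, 48, 80, 112, 144]
        member = synthesize_hash(keys)
        print([member(k) for k in keys])      # all True, each in constant time
        print(member(17))                     # False for this non-key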

  13. Transitional Division Algorithms.

    ERIC Educational Resources Information Center

    Laing, Robert A.; Meyer, Ruth Ann

    1982-01-01

    A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…

  14. Ultrametric Hierarchical Clustering Algorithms.

    ERIC Educational Resources Information Center

    Milligan, Glenn W.

    1979-01-01

    Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)

  15. The Training Effectiveness Algorithm.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    1988-01-01

    Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)

  16. A new algorithm for the detection of seismic quiescence: introduction of the RTM algorithm, a modified RTL algorithm

    NASA Astrophysics Data System (ADS)

    Nagao, Toshiyasu; Takeuchi, Akihiro; Nakamura, Kenji

    2011-03-01

    There are a number of reports on seismic quiescence phenomena before large earthquakes. The RTL algorithm is a weighted-coefficient statistical method that takes into account the magnitude, occurrence time, and place of an earthquake when seismicity pattern changes before large earthquakes are being investigated. However, we consider the original RTL algorithm to be overweighted on distance. In this paper, we introduce a modified RTL algorithm, called the RTM algorithm, and apply it to three large earthquakes in Japan as test cases: the Hyogo-ken Nanbu earthquake in 1995 (M_JMA 7.3), the Noto Hanto earthquake in 2007 (M_JMA 6.9), and the Iwate-Miyagi Nairiku earthquake in 2008 (M_JMA 7.2). Because this algorithm uses several parameters to characterize the weighted coefficients, multiparameter sets have to be prepared for the tests. The results show that the RTM algorithm is more sensitive than the RTL algorithm to seismic quiescence phenomena. This paper represents the first step in a series of future analyses of seismic quiescence phenomena using the RTM algorithm. At this moment, all surveyed parameters are empirically selected for use in the method. We still have to consider the physical meaning of the "best fit" parameters, such as their relation to ΔCFS, among others, in future analyses.

  17. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  18. Computational selection of transcriptomics experiments improves Guilt-by-Association analyses.

    PubMed

    Bhat, Prajwal; Yang, Haixuan; Bögre, László; Devoto, Alessandra; Paccanaro, Alberto

    2012-01-01

    The Guilt-by-Association (GBA) principle, according to which genes with similar expression profiles are functionally associated, is widely applied for functional analyses using large heterogeneous collections of transcriptomics data. However, the use of such large collections could hamper GBA functional analysis for genes whose expression is condition specific. In these cases a smaller set of condition related experiments should instead be used, but identifying such functionally relevant experiments from large collections based on literature knowledge alone is an impractical task. We begin this paper by analyzing, both from a mathematical and a biological point of view, why only condition specific experiments should be used in GBA functional analysis. We are able to show that this phenomenon is independent of the functional categorization scheme and of the organisms being analyzed. We then present a semi-supervised algorithm that can select functionally relevant experiments from large collections of transcriptomics experiments. Our algorithm is able to select experiments relevant to a given GO term, MIPS FunCat term or even KEGG pathways. We extensively test our algorithm on large dataset collections for yeast and Arabidopsis. We demonstrate that: using the selected experiments there is a statistically significant improvement in correlation between genes in the functional category of interest; the selected experiments improve GBA-based gene function prediction; the effectiveness of the selected experiments increases with annotation specificity; our algorithm can be successfully applied to GBA-based pathway reconstruction. Importantly, the set of experiments selected by the algorithm reflects the existing literature knowledge about the experiments. [A MATLAB implementation of the algorithm and all the data used in this paper can be downloaded from the paper website: http://www.paccanarolab.org/papers/CorrGene/].

  19. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  20. Algorithms for Automated DNA Assembly

    DTIC Science & Technology

    2010-01-01

    Even when a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution.

  1. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  2. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum–classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
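
    For reference, the classical matrix pencil method that the quantum algorithm accelerates can be sketched in a few lines of NumPy (a noiseless demo; the pencil parameter choice is illustrative):

        import numpy as np
        from scipy.linalg import hankel

        dt, N, M = 0.01, 200, 2          # sample step, no. of samples, components
        n = np.arange(N)
        z_true = np.exp(np.array([-0.5 + 2j*np.pi*5, -1.0 + 2j*np.pi*12]) * dt)
        y = (0.7 * z_true[0]**n + 1.3 * z_true[1]**n).real   # damped sinusoids

        L = N // 3                                   # pencil parameter
        Y = hankel(y[:N - L], y[N - L - 1:])         # (N-L) x (L+1) Hankel matrix
        Y0, Y1 = Y[:, :-1], Y[:, 1:]
        # Signal poles = dominant eigenvalues of pinv(Y0) @ Y1 (rank 2M here,
        # since a real signal contributes conjugate pole pairs).
        vals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
        poles = vals[np.argsort(-np.abs(vals))[:2 * M]]
        freqs = np.abs(np.angle(poles)) / (2 * np.pi * dt)
        print(sorted({round(float(f), 2) for f in freqs}))   # [5.0, 12.0]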

  3. A Fast parallel tridiagonal algorithm for a class of CFD applications

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Sun, Xian-He

    1996-01-01

    The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents a variation of the PDD algorithm, the reduced PDD algorithm, for study. The new algorithm maintains the minimum communication provided by the PDD algorithm but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.

  4. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.

  5. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  6. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655

  7. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches.
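
    For concreteness, here is a canonical PSO loop on one of the usual benchmark functions (a generic textbook sketch, not the survey's exact experimental setup):

        import numpy as np

        def pso(f, dim, n=30, iters=300, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
            # Each particle is pulled toward its personal best and the swarm's
            # global best, with inertia weight w damping the velocity.
            rng = np.random.default_rng(5)
            x = rng.uniform(lo, hi, (n, dim))
            v = np.zeros((n, dim))
            pbest, pcost = x.copy(), np.array([f(p) for p in x])
            for _ in range(iters):
                g = pbest[np.argmin(pcost)]
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                cost = np.array([f(p) for p in x])
                better = cost < pcost
                pbest[better], pcost[better] = x[better], cost[better]
            return pbest[np.argmin(pcost)], pcost.min()

        # Rastrigin, one of the usual benchmark functions in such comparisons.
        rastrigin = lambda v: 10 * len(v) + float(np.sum(v**2 - 10 * np.cos(2 * np.pi * v)))
        print(pso(rastrigin, dim=5))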

  8. A Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen

    2016-06-01

    Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. Extracting various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most road edges exhibit a difference in elevation, and road edges with pavement lie in two different planes. Our algorithm first extracts a rough plane using the RANSAC algorithm, and then extracts multiple refined planes containing only pavement from the rough plane. The road edges are extracted from these refined planes. In practice, the rough and refined planes are often extracted poorly because of rough road surfaces and varying point cloud density. To eliminate the influence of rough road surfaces, we use a technique similar to differencing a DSM (digital surface model) and a DTM (digital terrain model), and we also propose a method that resamples the point clouds to a similar density. Experiments show the validity of the proposed method on multiple datasets (e.g., urban roads, highways, and some rural roads). We use the same parameters throughout the experiments, and our algorithm achieves real-time processing speeds.
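
    The RANSAC plane-extraction step the authors describe can be illustrated with a minimal sketch; the synthetic point cloud, tolerance, and iteration count below are invented for the example and are not the paper's settings.

```python
import numpy as np

def ransac_plane(points, iters=500, tol=0.05, seed=0):
    """Fit a dominant plane to an (N, 3) point cloud: repeatedly fit a
    plane to 3 random points and keep the one with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Toy cloud: a flat road at z=0 plus a raised pavement at z=0.15.
road = np.c_[np.random.rand(500, 2) * 10, np.zeros(500)]
kerb = np.c_[np.random.rand(100, 2) * 10, np.full(100, 0.15)]
cloud = np.vstack([road, kerb])
print(ransac_plane(cloud).sum(), "points on the dominant (road) plane")
```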

  9. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data from their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than that of other algorithms, and that performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.

  10. An Efficient Algorithm for Clustering of Large-Scale Mass Spectrometry Data.

    PubMed

    Saeed, Fahad; Pisitkun, Trairak; Knepper, Mark A; Hoffert, Jason D

    2012-10-04

    High-throughput spectrometers are capable of producing data sets containing thousands of spectra for a single biological sample. These data sets contain a substantial amount of redundancy from peptides that may be selected multiple times in an LC-MS/MS experiment. In this paper, we present an efficient algorithm, CAMS (Clustering Algorithm for Mass Spectra), for clustering mass spectrometry data, which increases both the sensitivity and confidence of spectral assignment. CAMS utilizes a novel metric, called F-set, that allows accurate identification of spectra that are similar. A graph theoretic framework is defined that allows the F-set metric to be used efficiently for accurate cluster identification. The accuracy of the algorithm is tested on real HCD and CID data sets with varying amounts of peptides. Our experiments show that the proposed algorithm is able to cluster spectra with very high accuracy in a reasonable amount of time for large spectral data sets. Thus, the algorithm is able to decrease the computational time by compressing the data sets while increasing the throughput of the data by interpreting low S/N spectra.

  11. Adaptive path planning: Algorithm and analysis

    SciTech Connect

    Chen, Pang C.

    1993-03-01

    Path planning has to be fast to support real-time robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours of computation. To alleviate this problem, we present a learning algorithm that uses past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful subgoals is learned to support faster planning. The algorithm is suitable for both stationary and incrementally-changing environments. To analyze our algorithm, we use a previously developed stochastic model that quantifies experience utility. Using this model, we characterize the situations in which the adaptive planner is useful, and provide quantitative bounds to predict its behavior. The results are demonstrated with problems in manipulator planning. Our algorithm and analysis are sufficiently general that they may also be applied to task planning or other planning domains in which experience is useful.

  12. A multiobjective evolutionary algorithm to find community structures based on affinity propagation

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng

    2016-07-01

    Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA takes the method of affinity propagation (AP) to initially divide the network. To accelerate its convergence, the multiobjective evolutionary algorithm selects nondominated solutions from the preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto optimal front through constantly selecting nondominated solutions from the population after crossover and mutation in iterations, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the process of searching using the multiobjective evolutionary algorithm. According to this strategy, the preliminary partitioning results obtained by AP will be archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and has faster convergence speed compared with seven other state-of-the-art algorithms.

  13. Comparison of eight unwrapping algorithms applied to Fourier-transform profilometry

    NASA Astrophysics Data System (ADS)

    Zappa, E.; Busca, G.

    2008-02-01

    Phase unwrapping is a task common to many applications, such as interferometric imaging, medical magnetic resonance imaging, and solid-state physics. Fourier transform profilometry (FTP) evaluates the height distribution of an object by analyzing the interference between a plane reference grating and a deformed object grating. Since the height information is extracted from the phase of a complex function, phase unwrapping is a critical step of the process. Several unwrapping algorithms have been proposed in the literature, but applied to measurement technologies other than FTP. The purpose of this paper is to characterize the performance of eight different unwrapping algorithms applied to the FTP optical scanning method and to identify the best one. The algorithms chosen are: Goldstein's algorithm, the quality-guided path following method, the mask cut method, Flynn's method, the multi-grid method, the weighted multi-grid method, the preconditioned conjugate gradient method, and the minimum Lp-norm method. The methods were tested on real images acquired by an FTP scanner developed and calibrated for these experiments. The objects used vary from simple geometries, like planes and cylinders, to the complex shapes of everyday objects. The algorithms were ranked according to phase unwrapping errors, execution time, and the accuracy of the object shapes recovered by the scanning method compared with the real ones. The results show that the quality-guided algorithm is the best fit for the FTP application.
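
    For contrast with the quality-guided winner, the simplest strategy in this family, naive path-following unwrapping, can be sketched in a few lines (here via numpy.unwrap applied row by row; the synthetic phase ramp is an invented test case, not one of the paper's scanned objects).

```python
import numpy as np

def unwrap_rows_then_columns(wrapped):
    """Naive path-following unwrap of a 2-D wrapped phase image:
    unwrap each row, then unwrap down the first column to tie rows
    together. Fast, but noise on the path propagates errors, which is
    why quality-guided orderings perform better on real FTP images."""
    out = np.unwrap(wrapped, axis=1)          # unwrap along each row
    col = np.unwrap(out[:, 0])                # stitch rows via column 0
    out += (col - out[:, 0])[:, None]
    return out

# Synthetic ramp whose true phase exceeds 2*pi, wrapped into (-pi, pi].
y, x = np.mgrid[0:64, 0:64]
true_phase = 0.35 * x + 0.2 * y
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_rows_then_columns(wrapped)
print(np.allclose(recovered - recovered[0, 0],
                  true_phase - true_phase[0, 0], atol=1e-6))
```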

  14. A hole-filling algorithm based on pixel labeling for DIBR

    NASA Astrophysics Data System (ADS)

    Lei, Liansha; Chen, Zaiqing; Shi, Junsheng

    2014-09-01

    Depth Image Based Rendering (DIBR) is one of the effective methods for generating stereoscopic image pairs by 3D image warping; however, holes are produced when using this method. Hole-filling algorithms are essential for improving the image quality of stereoscopic image pairs. In this paper, a new hole-filling algorithm based on pixel labeling is proposed. First, hole pixels in stereoscopic image pairs produced by DIBR are marked as 0, whereas non-hole pixels are marked as 1. The image pairs are then traversed only once, filling each hole according to its eight-neighborhood pixels: when the number of non-hole neighbors is greater than a threshold, the hole is filled with the average of the non-hole pixel values; otherwise, a cross diamond search is performed in every direction to find the closest non-hole pixels until the number of non-hole pixels exceeds the threshold. The proposed method is evaluated using existing objective assessment methods, such as PSNR and SSIM. Experimental results show that the proposed hole-filling algorithm improves both subjective and objective assessment compared with the conventional hole-filling algorithm on the same source images. The proposed algorithm is not only simple but also effectively eliminates the holes generated by the DIBR method.
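
    A minimal sketch of the eight-neighborhood averaging step described above follows; the threshold value is invented and the cross diamond search fallback is only stubbed in a comment, so this illustrates the labeling idea rather than the authors' full algorithm.

```python
import numpy as np

def fill_holes(img, mask, threshold=3):
    """Single-pass hole filling: mask is 0 for holes, 1 for valid pixels.
    A hole whose 8-neighborhood has more than `threshold` valid pixels is
    replaced by their average; sparser holes need a wider search."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.where(mask == 0)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        valid = mask[y0:y1, x0:x1] == 1
        if valid.sum() > threshold:
            out[y, x] = out[y0:y1, x0:x1][valid].mean()
            # (a full implementation would fall back to a cross diamond
            # search here when too few valid neighbors exist)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
mask = np.ones_like(img, dtype=int)
mask[2, 2] = 0                 # one synthetic hole in the center
print(fill_holes(img, mask)[2, 2])   # average of the 8 neighbors = 12.0
```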

  15. New image compression algorithm based on improved reversible biorthogonal integer wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Yu, Xianchuan

    2012-10-01

    Low computational complexity and high coding efficiency are the most significant requirements for image compression and transmission. The reversible biorthogonal integer wavelet transform (RB-IWT) offers low computational complexity via the lifting scheme (LS) and allows both lossy and lossless decoding from a single bitstream. However, RB-IWT degrades the performance and peak signal-to-noise ratio (PSNR) of lossy image coding. In this paper, a new IWT-based compression scheme based on an optimal RB-IWT and an improved SPECK is presented. In this new algorithm, the scaling parameter of each subband is chosen to optimize the transform coefficients. During coding, all image coefficients are encoded using a simple, efficient quadtree partitioning method. This scheme is similar to SPECK, but the new method uses a single quadtree partitioning instead of the set partitioning and octave band partitioning of the original SPECK, which reduces coding complexity. Experimental results show that the new algorithm not only achieves low computational complexity, but also provides lossy-coding PSNR performance comparable to the SPIHT algorithm using RB-IWT filters and better than the SPECK algorithm. Additionally, the new algorithm supports both lossy and lossless compression efficiently from a single bitstream. The presented algorithm is valuable for future remote sensing image compression.
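
    The lifting scheme underlying reversible integer wavelets can be illustrated with the well-known 5/3 integer transform; the sketch below uses periodic boundaries instead of the symmetric extension a production codec would use, and it is not the paper's optimized RB-IWT.

```python
import numpy as np

def lift53_forward(x):
    """One level of the reversible 5/3 integer wavelet via lifting:
    predict odd samples from even neighbors, then update the evens.
    Integer arithmetic with floor makes the transform exactly invertible."""
    s, d = x[0::2].copy(), x[1::2].copy()
    d -= (s + np.roll(s, -1)) // 2          # predict step
    s += (d + np.roll(d, 1) + 2) // 4       # update step
    return s, d

def lift53_inverse(s, d):
    s = s - (d + np.roll(d, 1) + 2) // 4    # undo update
    d = d + (s + np.roll(s, -1)) // 2       # undo predict
    out = np.empty(s.size + d.size, dtype=s.dtype)
    out[0::2], out[1::2] = s, d
    return out

x = np.random.randint(0, 256, 16)
print(np.array_equal(x, lift53_inverse(*lift53_forward(x))))  # True
```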

  16. Study on the optimal algorithm prediction of corn leaf component information based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu

    2016-09-01

    The genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy, and reduce the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established in order to assess the feasibility of genetic-algorithm band optimization, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the spectral wavelengths corresponding to these 12 characteristic bands as variables, a PLS model for the SPAD values of the corn leaves was established, with modeling results showing r = 0.7825. These results were better than those of the PLS models established on the full spectrum and on the experience-based selected bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by PLS, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.

  17. Fast phase unwrapping algorithm based on region partition for structured light vision measurement

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Su, Hang

    2014-04-01

    Phase unwrapping is a key problem in phase-shifting profilometry vision measurement for complex object surface shapes. The simple path-following phase unwrapping algorithm is fast but suffers serious unwrapping errors for complex shapes. The Goldstein+flood phase unwrapping algorithm can handle some complex shape measurements; however, it is time consuming. We propose a fast phase unwrapping algorithm based on region partition according to a quality map of the wrapped phase. In this algorithm, the wrapped phase image is divided into several regions using partition thresholds determined from the histogram of quality values. Each region is unwrapped using a simple path-following algorithm, generating several groups with different priorities. These groups are merged in order of priority from high to low, and a final absolute phase is obtained. The proposed method is applied to wrapped phase images of three objects with and without noise. Experiments show that the proposed method is much faster, more accurate, and more robust to noise than the Goldstein+flood algorithm in unwrapping complex phase images.

  18. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, generally yielding superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is tested on real-time abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. Comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
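
    For reference, the conventional FCM baseline that the paper modifies alternates two closed-form updates: membership-weighted centers, then inverse-distance memberships. A minimal sketch follows, using invented toy data rather than MR images.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard Fuzzy C-means: alternate between updating cluster
    centers (membership-weighted means) and memberships (inverse
    distance ratios raised to 2/(m-1)) until the partition stabilizes."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        new_u = 1.0 / (d ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            break
        u = new_u
    return centers, u

# Toy 1-D "intensity" data with two modes, treated as feature vectors.
X = np.r_[np.random.normal(0, .3, 100), np.random.normal(4, .3, 100)][:, None]
centers, u = fuzzy_cmeans(X, c=2)
print(np.sort(centers.ravel()))   # approximately [0, 4]
```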

  19. Use of Algorithm of Changes for Optimal Design of Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.

    2010-05-01

    For economic reasons, the optimal design of heat exchangers is required. Heat exchanger design is usually based on an iterative process involving the design conditions, equipment geometries, and the heat transfer and friction factor correlations. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost. The process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to heat exchanger design, with results that outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-tube heat exchanger [3]. This new method, based on the I Ching, was developed originally by the author. In the algorithm, the hexagram operations of the I Ching are generalized to the binary string case, and an iterative procedure that imitates I Ching inference is defined. Following [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was taken as the objective function. Through the case study, the results show that the algorithm of changes is comparable to the GA method: both methods can find the optimal solution in a short time. However, because it requires no information interchange between binary strings, the algorithm of changes has an advantage over the GA in parallel computation.

  20. A hybrid algorithm for robust acoustic source localization in noisy and reverberant environments

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Ramesh; Dessonville, Timothy

    2014-09-01

    Acoustic source localization using microphone arrays is widely used in videoconferencing and surveillance systems. However, it remains a challenging task to develop efficient algorithms for accurate estimation of source location using distributed data processing. In this work, we propose a new algorithm for efficient localization of a speaker in noisy and reverberant environments such as videoconferencing. We propose a hybrid algorithm that combines the generalized cross correlation based phase transform method (GCC-PHAT) and Tabu search to obtain a robust and accurate estimate of the speaker location. The Tabu search algorithm iteratively improves the time difference of arrival (TDOA) estimate of GCC-PHAT by examining neighboring solutions until convergence in the TDOA value is obtained. Experiments were performed on real-world data recorded in a meeting room in the presence of noise sources such as computers and fans. Our results demonstrate that the proposed hybrid algorithm outperforms GCC-PHAT, especially when the noise level is high. This shows the robustness of the proposed algorithm in noisy and realistic videoconferencing systems.
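
    The GCC-PHAT half of the hybrid can be sketched compactly: whiten the cross-power spectrum so only phase remains, then locate the peak of its inverse transform. The signals and sampling rate below are synthetic stand-ins, not the meeting-room recordings.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """GCC-PHAT: the peak of the phase-transform cross-correlation
    gives the time difference of arrival (TDOA) between two channels."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                  # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Two microphones: the second hears the source 25 samples later.
fs, delay = 16000, 25
src = np.random.randn(4096)
mic1, mic2 = src, np.r_[np.zeros(delay), src[:-delay]]
print(gcc_phat(mic2, mic1, fs) * fs)        # approximately 25 samples
```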

  1. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, in terms of global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  2. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41%, and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise for operational use, accurate thunderstorm cell tracking work must be undertaken so that lightning trends can be tracked on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).

  3. Fourth Order Algorithms for Solving Diverse Many-Body Problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Forbert, Harald A.; Chen, Chia-Rong; Kidwell, Donald W.; Ciftja, Orion

    2001-03-01

    We show that the method of factorizing an evolution operator of the form e^{ε(A+B)} to fourth order with purely positive coefficients yields new classes of symplectic algorithms for solving classical dynamical problems, unitary algorithms for solving the time-dependent Schrödinger equation, norm-preserving algorithms for solving the Langevin equation, and large-time-step convergent Diffusion Monte Carlo algorithms. Results for each class of problems will be presented and discussed.

  4. MU-MIMO Pairing Algorithm Using Received Power

    NASA Astrophysics Data System (ADS)

    Kim, Young-Joon; Lee, Jung-Seung; Baik, Doo-Kwon

    In this letter, a new received power pairing scheduling (PPS) algorithm is proposed for Multi User Multiple Input and Multiple Output (MU-MIMO) systems. In contrast to existing algorithms that manage complex orthogonal factors, the PPS algorithm simply utilizes the CINR to determine a MU-MIMO pair. Simulation results show that the PPS algorithm achieves up to 77% of the MU-MIMO gain of determinant pairing scheduling (DPS) with low complexity.

  5. A genetic algorithm for solving supply chain network design model

    NASA Astrophysics Data System (ADS)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  6. Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2003-01-01

    The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three degree of freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.

  7. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model

    PubMed Central

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-01-01

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises that train balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. An automatic fall detection system has the opportunity to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges, variability and ambiguity, are also tackled in order to provide a reliable fall detection algorithm. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model, including free fall, impact, and rest phases, is utilized for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall sensitivity, specificity, precision, and accuracy using the knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to individual differences. PMID:28208694
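
    A minimal sketch of a threshold pass over the three-phase model (free fall, impact, rest) is shown below; all thresholds and the synthetic accelerometer trace are invented for illustration and do not reproduce the paper's tuned values.

```python
import numpy as np

def detect_fall(acc_mag, fs, free_fall_g=0.5, impact_g=2.5, rest_tol=0.15,
                rest_window_s=1.0):
    """Threshold pass over a multiphase fall model: a free-fall dip
    (magnitude well below 1 g), a following impact spike, then a rest
    phase where the magnitude stays near 1 g (the subject lies still)."""
    rest_n = int(rest_window_s * fs)
    settle = int(0.5 * fs)                    # let the impact ring down
    for i, a in enumerate(acc_mag):
        if a < free_fall_g:                   # phase 1: free fall
            spike = acc_mag[i:i + fs]         # look ahead ~1 s
            j = int(np.argmax(spike))
            if spike[j] > impact_g:           # phase 2: impact
                rest = acc_mag[i + j + settle:i + j + settle + rest_n]
                if len(rest) == rest_n and np.all(np.abs(rest - 1) < rest_tol):
                    return True               # phase 3: rest
    return False

# Synthetic 50 Hz accelerometer magnitude trace (in g): walk, fall, lie.
fs = 50
walk = 1 + 0.1 * np.sin(np.linspace(0, 20, 3 * fs))
fall = np.r_[np.full(10, 0.2), np.full(3, 3.0)]   # free fall then impact
lie = np.full(3 * fs, 1.0)
print(detect_fall(np.r_[walk, fall, lie], fs))    # True
```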

  8. Quantum Algorithm for Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Joag, Pramod; Mehendale, Dhananjay

    The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving a new system of linear equations in each iterative step. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these problems can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size involving millions of variables.

  9. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if any occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and with the patient in different positions. This module transmits the ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any signals where peaks are important for diagnostic purposes.

  10. Global structural optimizations of surface systems with a genetic algorithm

    SciTech Connect

    Chuang, Feng-Chuan

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems, including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Aln were performed. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms, and simulated STM images of the Si magic cluster exhibit a ring-like feature similar to the STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest energy structures of high-index semiconductor surfaces. The lowest energy structures of Si(105) and Si(114) were determined successfully, and the results are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. Optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases are reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.

  11. Improving Search Algorithms by Using Intelligent Coordinates

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function g_η. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  12. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and it can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.

  13. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  14. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
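
    A minimal sketch of a PRP-type conjugate gradient method with the βk ≥ 0 safeguard follows. The Armijo backtracking line search is a generic stand-in, since the paper's point is precisely that its directions do not depend on the line search; the quadratic test problem is invented.

```python
import numpy as np

def prp_cg(f, grad, x0, iters=200, tol=1e-8):
    """Polak-Ribiere-Polyak conjugate gradient with the beta >= 0
    safeguard mentioned above and a simple backtracking line search."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)                    # Armijo backtracking
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-16:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+, beta >= 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Quadratic test problem: minimize ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: float(np.sum((A @ x - b) ** 2))
grad = lambda x: 2 * A.T @ (A @ x - b)
x = prp_cg(f, grad, np.zeros(2))
print(np.allclose(A @ x, b, atol=1e-5))   # True
```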

  15. Specific PCR product primer design using memetic algorithm.

    PubMed

    Yang, Cheng-Hong; Cheng, Yu-Huei; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2009-01-01

    To provide feasible primer sets for performing a polymerase chain reaction (PCR) experiment, many primer design methods have been proposed. However, the majority of these methods require a relatively long time to obtain an optimal solution since large quantities of template DNA need to be analyzed. Furthermore, the designed primer sets usually do not provide a specific PCR product size. In recent years, evolutionary computation has been applied to PCR primer design and yielded promising results. In this article, a memetic algorithm (MA) is proposed to solve primer design problems associated with providing a specific product size for PCR experiments. The MA is compared with a genetic algorithm (GA) using an accuracy formula to estimate the quality of the primer design and test the running time. Overall, 50 accession nucleotide sequences were sampled for the comparison of the accuracy of the GA and MA for primer design. Five hundred runs of the GA and MA primer design were performed with PCR product lengths of 150-300 bps and 500-800 bps, and two different methods of calculating T(m) for each accession nucleotide sequence were tested. A comparison of the accuracy results for the GA and MA primer design showed that the MA primer design yielded better results than the GA primer design. The results further indicate that the proposed method finds optimal or near-optimal primer sets and effective PCR products in a dry dock experiment. Related materials are available online at http://bio.kuas.edu.tw/ma-pd/.

  16. Children Show Selective Trust in Technological Informants

    ERIC Educational Resources Information Center

    Danovitch, Judith H.; Alzahabi, Reem

    2013-01-01

    Although children are often exposed to technological devices early in life, little is known about how they evaluate these novel sources of information. In two experiments, children aged 3, 4, and 5 years old ("n" = 92) were presented with accurate and inaccurate computer informants, and they subsequently relied on information provided by…

  17. The Demise of the Magic Lantern Show

    ERIC Educational Resources Information Center

    Schneider, Edward W.

    2006-01-01

    Extracting and applying lessons from history is rarely easy and sometimes risky but there are moments when historical records are so compelling that they rise above mere proof to the level of interocular impact. In this article, the author shares his similar experience while visiting his colleague, Professor Bruce Clark, at the University of…

  18. Dynamic Shortest Path Algorithms for Hypergraphs

    DTIC Science & Technology

    2014-01-01

    The proposed algorithms are applied to hypergraphs arising in energy-efficient routing in multichannel multiradio networks and to the Enron email data set. In Section VII, we apply the proposed shortest hyperpath algorithms to the Enron e-mail data set; this experiment illustrates the application of the algorithms to tracking the importance of each actor. ...

  19. Quantum hyperparallel algorithm for matrix multiplication.

    PubMed

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-04-29

    Hyperentangled states, entangled states with more than one degree of freedom, are considered a promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N^2), which is better than the best known classical algorithm. In our scheme, an N dimensional vector is mapped to the state of a single source, which is separated into N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of the dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and "big data" analysis.

  20. Online clustering algorithms for radar emitter classification.

    PubMed

    Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max

    2005-08-01

    Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
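
    The competitive-learning variant can be sketched as an online winner-take-all procedure with a vigilance radius for opening new clusters; the pulse descriptors below are synthetic, and the distance threshold and learning rate are invented for the example.

```python
import numpy as np

def online_competitive_clustering(stream, vigilance=2.0, lr=0.1):
    """Online winner-take-all clustering: each arriving pulse vector is
    assigned to the nearest prototype, which moves toward it; a sample
    farther than `vigilance` from every prototype seeds a new cluster."""
    prototypes = []
    labels = []
    for x in stream:
        if prototypes:
            d = [np.linalg.norm(x - p) for p in prototypes]
            k = int(np.argmin(d))
            if d[k] < vigilance:
                prototypes[k] += lr * (x - prototypes[k])  # move winner
                labels.append(k)
                continue
        prototypes.append(x.astype(float).copy())          # new emitter
        labels.append(len(prototypes) - 1)
    return prototypes, labels

# Toy pulse descriptors (e.g., RF, PRI) from two simulated emitters.
rng = np.random.default_rng(1)
emitter_a = rng.normal([10.0, 1.0], 0.2, (50, 2))
emitter_b = rng.normal([14.0, 3.0], 0.2, (50, 2))
stream = rng.permutation(np.vstack([emitter_a, emitter_b]))
protos, labels = online_competitive_clustering(stream)
print(len(protos))    # 2 clusters recovered
```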

  1. Efficient Algorithms for Langevin and DPD Dynamics.

    PubMed

    Goga, N; Rzepiela, A J; de Vries, A H; Marrink, S J; Berendsen, H J C

    2012-10-09

    In this article, we present several algorithms for stochastic dynamics, including Langevin dynamics and different variants of Dissipative Particle Dynamics (DPD), applicable to systems with or without constraints. The algorithms are based on the impulsive application of friction and noise, thus avoiding the computational complexity of algorithms that apply continuous friction and noise. Simulation results on thermostat strength and diffusion properties for ideal gas, coarse-grained (MARTINI) water, and constrained atomic (SPC/E) water systems are discussed. We show that the measured thermal relaxation rates agree well with theoretical predictions. The influence of various parameters on the diffusion coefficient is discussed.
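
    A minimal sketch of an impulsive Langevin step in the spirit described (friction and matched noise applied as discrete impulses satisfying fluctuation-dissipation) follows; it uses a toy harmonic oscillator, not the MARTINI or SPC/E systems of the paper.

```python
import numpy as np

def impulsive_langevin(force, x, v, mass, gamma, kT, dt, steps, rng):
    """Leapfrog with an impulsive Langevin thermostat: after each velocity
    update, apply friction and noise as a discrete impulse with
    f = 1 - exp(-gamma*dt), which preserves the correct kT on average."""
    f = 1.0 - np.exp(-gamma * dt)
    ke_samples = []
    for _ in range(steps):
        v += dt * force(x) / mass
        # impulsive friction + matched noise (fluctuation-dissipation)
        v += -f * v + np.sqrt(f * (2.0 - f) * kT / mass) * rng.standard_normal()
        x += dt * v
        ke_samples.append(0.5 * mass * v * v)
    return x, v, np.mean(ke_samples)

rng = np.random.default_rng(0)
# Harmonic oscillator; equipartition predicts <KE> = kT/2 per dof.
x, v, mean_ke = impulsive_langevin(force=lambda x: -x, x=1.0, v=0.0,
                                   mass=1.0, gamma=2.0, kT=1.0,
                                   dt=0.01, steps=200_000, rng=rng)
print(mean_ke)   # close to 0.5
```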

  2. Quantum hyperparallel algorithm for matrix multiplication

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan

    2016-04-01

    Hyperentangled states, entangled states with more than one degree of freedom, are considered a promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N^2), which is better than the best known classical algorithm. In our scheme, an N dimensional vector is mapped to the state of a single source, which is separated into N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of the dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and “big data” analysis.

  3. Dual format algorithm implementation with gotcha data

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Rigling, Brian D.

    2012-05-01

    The Dual Format Algorithm (DFA) is an alternative to the Polar Format Algorithm (PFA) where the image is formed first to an arbitrary grid instead of a Cartesian grid. The arbitrary grid is specifically chosen to allow for more efficient application of defocus and distortion corrections that occur due to range curvature. We provide a description of the arbitrary image grid and show that the quadratic phase errors are isolated along a single dimension of the image. We describe an application of the DFA to circular SAR data and analyze the image focus. For an example SAR dataset, the DFA doubles the focused image size of the PFA algorithm with post imaging corrections.

  4. A positive detecting code and its decoding algorithm for DNA library screening.

    PubMed

    Uehara, Hiroaki; Jimbo, Masakazu

    2009-01-01

    The study of gene functions requires high-quality DNA libraries. However, a large number of tests and screenings are necessary for compiling such libraries. We describe an algorithm for extracting as much information as possible from pooling experiments for library screening. Collections of clones are called pools, and a pooling experiment is a group test for detecting all positive clones. The probability of positiveness for each clone is estimated according to the outcomes of the pooling experiments. Clones with high chance of positiveness are subjected to confirmatory testing. In this paper, we introduce a new positive clone detecting algorithm, called the Bayesian network pool result decoder (BNPD). The performance of BNPD is compared, by simulation, with that of the Markov chain pool result decoder (MCPD) proposed by Knill et al. in 1996. Moreover, the combinatorial properties of pooling designs suitable for the proposed algorithm are discussed in conjunction with combinatorial designs and d-disjunct matrices. We also show the advantage of utilizing packing designs or BIB designs for the BNPD algorithm.

  5. Aerosol retrieval experiments in the ESA Aerosol_cci project

    NASA Astrophysics Data System (ADS)

    Holzer-Popp, T.; de Leeuw, G.; Griesfeller, J.; Martynenko, D.; Klüser, L.; Bevan, S.; Davies, W.; Ducos, F.; Deuzé, J. L.; Graigner, R. G.; Heckel, A.; von Hoyningen-Hüne, W.; Kolmonen, P.; Litvinov, P.; North, P.; Poulsen, C. A.; Ramon, D.; Siddans, R.; Sogacheva, L.; Tanre, D.; Thomas, G. E.; Vountas, M.; Descloitres, J.; Griesfeller, J.; Kinne, S.; Schulz, M.; Pinnock, S.

    2013-08-01

    Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010-2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing pre-cursor algorithms three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer

  6. Scheduling for the National Hockey League Using a Multi-objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Craig, Sam; While, Lyndon; Barone, Luigi

    We describe a multi-objective evolutionary algorithm that derives schedules for the National Hockey League according to three objectives: minimising the teams' total travel, promoting equity in rest time between games, and minimising long streaks of home or away games. Experiments show that the system is able to derive schedules that beat the 2008-9 NHL schedule in all objectives simultaneously, and that it returns a set of schedules that offer a range of trade-offs across the objectives.

  7. Dwell time algorithm for multi-mode optimization in manufacturing large optical mirrors

    NASA Astrophysics Data System (ADS)

    Liu, Zhenyu

    2014-08-01

    CCOS (Computer Controlled Optical Surfacing) is one of the most important methods for manufacturing optical surfaces. By controlling the dwell time of a polishing tool on the mirror, we can obtain the desired material removal. As optical surfaces become larger, the traditional CCOS method cannot meet the demand for manufacturing mirrors with higher efficiency and precision. This paper presents a new method using multi-mode optimization. By calculating the dwell time maps of different tools in one optimization cycle, the larger tool and the smaller one gain complementary advantages, and a global optimization is obtained over multiple tools and multiple processing cycles. To calculate the dwell time of the different tools simultaneously, we use a multi-mode dwell time algorithm based on matrix calculation. Simulation experiments with this algorithm show that the multi-mode optimization algorithm can improve efficiency while maintaining good precision.
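
    Dwell time computation is commonly posed as a non-negative least squares problem, removal ≈ A·t with the tool influence function in the columns of A. The 1-D multi-tool sketch below follows that standard formulation and is not necessarily the paper's exact matrix method; the Gaussian influence functions and target removal map are invented.

```python
import numpy as np
from scipy.optimize import nnls

# 1-D dwell-time solve: desired removal r = A @ t, where column j of A
# is the tool influence function (removal per unit time) centered at
# dwell position j. Non-negativity is required since time cannot be < 0.
n = 80
x = np.arange(n)
tif = lambda c, width: np.exp(-0.5 * ((x - c) / width) ** 2)  # Gaussian TIF

A_big = np.stack([tif(c, 6.0) for c in x], axis=1)    # large tool
A_small = np.stack([tif(c, 2.0) for c in x], axis=1)  # small tool

# Multi-mode idea: concatenate both tools' matrices and solve jointly, so
# the optimizer distributes work between the large and the small tool.
A = np.hstack([A_big, A_small])
desired = 1.0 + 0.3 * np.sin(2 * np.pi * x / n)       # target removal map

t, residual = nnls(A, desired)
t_big, t_small = t[:n], t[n:]
print(residual)                    # fitting error of the removal map
```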

  8. The Search for Effective Algorithms for Recovery from Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narawicz, Anthony J.

    2012-01-01

    Our previous work presented an approach for developing high confidence algorithms for recovering aircraft from loss of separation situations. The correctness theorems for the algorithms relied on several key assumptions, namely that state data for all local aircraft is perfectly known, that resolution maneuvers can be achieved instantaneously, and that all aircraft compute resolutions using exactly the same data. Experiments showed that these assumptions were adequate in cases where the aircraft are far away from losing separation, but are insufficient when the aircraft have already lost separation. This paper describes the results of this experimentation and proposes a new criteria specification for loss of separation recovery that preserves the formal safety properties of the previous criteria while overcoming some key limitations. Candidate algorithms that satisfy the new criteria are presented.

  9. New algorithm for efficient pattern recall using a static threshold with the Steinbuch Lernmatrix

    NASA Astrophysics Data System (ADS)

    Juan Carbajal Hernández, José; Sánchez Fernández, Luis P.

    2011-03-01

    An associative memory is a binary relationship between inputs and outputs, stored in a matrix M. The fundamental purpose of an associative memory is to recover correct output patterns from input patterns that may be altered by additive, subtractive, or combined noise. The Steinbuch Lernmatrix, developed in 1961, was the first associative memory and is used as a pattern recognition classifier. However, a misclassification problem arises when crossbar saturation occurs. A new algorithm that corrects this misclassification in the Lernmatrix is proposed in this work. The results for crossbar saturation with fundamental patterns demonstrate better pattern-recall performance with the new algorithm. Experiments with real data show a more efficient classifier when the algorithm is introduced into the original Lernmatrix. The thresholded Lernmatrix therefore emerges as a suitable alternative classifier for the developing pattern processing field.

  10. EM algorithm applied for estimating non-stationary region boundaries using electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.

    2010-04-01

    EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of the lungs of patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters by inaccurate selection of model parameters are overcome using the EM algorithm. Numerical experiments using a chest-shaped geometry are carried out with the proposed method, and the performance is compared with the extended Kalman filter (EKF). Results show the superior performance of EM in estimating the lung boundary.

  11. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    PubMed Central

    He, Baochun; Ma, Zhiyuan; Zong, Mao; Zhou, Xiangrong; Fujita, Hiroshi

    2013-01-01

    A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, following the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, including comparisons with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method. PMID:24066017

  12. Memetic algorithm-based multi-objective coverage optimization for wireless sensor networks.

    PubMed

    Chen, Zhi; Li, Shuai; Yue, Wenjing

    2014-10-30

    Maintaining effective coverage and extending the network lifetime as much as possible have become two of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as a multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve objectives such as high network coverage, effective node utilization, and more residual energy. In MOCADMA, the alternative solutions are represented as chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experimental and evaluation results show that MOCADMA maintains sensing coverage well, achieves higher network coverage while improving energy efficiency, effectively prolongs the network lifetime, and offers a significant improvement over some existing algorithms.

  13. 3D protein structure prediction using Imperialist Competitive algorithm and half sphere exposure prediction.

    PubMed

    Khaji, Erfan; Karami, Masoumeh; Garkani-Nejad, Zahra

    2016-02-21

    Predicting the native structure of proteins from half-sphere exposure and contact numbers has been studied extensively in recent years. Online predictors of these vectors and of the secondary structure of amino acid sequences have made it possible to design an objective function for the folding process. By choosing variant structures and directions for each secondary structure, a random conformation can be generated and a potential function assigned to it. Minimizing the potential function with meta-heuristic algorithms is the final step in finding the native structure of a given amino acid sequence. In this work, the Imperialist Competitive algorithm was used to accelerate the minimization. Moreover, we applied an adaptive procedure for the revolution step, and we used a more accurate tool for secondary structure prediction. The results of computational experiments on a standard benchmark show the superiority of the new algorithm over previous methods with a similar potential function.
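
    For readers unfamiliar with the Imperialist Competitive algorithm, the sketch below shows a minimal version of its core loop (assimilation of colonies toward imperialists plus an occasional revolution step) minimizing a stand-in cost function; the simplified empire handling and all parameters are assumptions, not the authors' implementation or their folding potential.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):              # stand-in cost; a real use would plug in
        return np.sum(x**2)     # the folding potential instead

    def ica_minimize(cost, dim=5, n_countries=40, n_imp=4, iters=200,
                     beta=2.0, p_revolt=0.1):
        """Minimal ICA sketch: assimilation + revolution only; empire
        competition is simplified to re-ranking each iteration."""
        pop = rng.uniform(-5, 5, (n_countries, dim))
        for _ in range(iters):
            order = np.argsort([cost(c) for c in pop])
            pop = pop[order]
            imps, cols = pop[:n_imp], pop[n_imp:]
            for i, col in enumerate(cols):
                imp = imps[i % n_imp]            # assign colony to an empire
                cols[i] = col + beta * rng.random(dim) * (imp - col)
                if rng.random() < p_revolt:      # revolution: random jolt
                    cols[i] += rng.normal(0, 0.5, dim)
            pop = np.vstack([imps, cols])
        best = min(pop, key=cost)
        return best, cost(best)

    best, val = ica_minimize(sphere)
    print(val)  # should be near 0
    ```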

  14. EARLY EXPERIENCE WITH A HYBRID PROCESSOR: K-MEANS CLUSTERING

    SciTech Connect

    M. GOKHALE; ET AL

    2001-02-01

    We discuss hardware/software coprocessing on a hybrid processor for a compute- and data-intensive hyper-spectral imaging algorithm, K-Means Clustering. The experiments are performed on the Altera Excalibur board using the soft IP core 32-bit NIOS RISC processor. In our experiments, we compare performance of the sequential algorithm with two different accelerated versions. We consider granularity and synchronization issues when mapping an algorithm to a hybrid processor. Our results show that on the Excalibur NIOS, a 15% speedup can be achieved over the sequential algorithm on images with 8 spectral bands where the pixels are divided into 8 categories. Speedup is limited by the communication cost of transferring data from external memory through the NIOS processor to the customized circuits. Our results indicate that future hybrid processors must either (1) have a clock rate 10X the speed of the configurable logic circuits or (2) include dual port memories that both the processor and configurable logic can access. If either of these conditions is met, the hybrid processor will show a factor of 10 speedup over the sequential algorithm. Such systems will combine the convenience of conventional processors with the speed of configurable logic.
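
    For reference, the sequential computation being accelerated is plain K-means clustering; a minimal NumPy sketch for 8-band pixels and 8 categories follows. The toy data are illustrative only; the paper's implementation partitions this loop between the NIOS core and the configurable logic.

    ```python
    import numpy as np

    def kmeans(pixels, k=8, iters=20, seed=0):
        """Plain K-means over (n_pixels, n_bands) data: assign each pixel
        to its nearest center, recompute the centers, repeat."""
        rng = np.random.default_rng(seed)
        centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):          # avoid emptying a cluster
                    centers[j] = pixels[labels == j].mean(axis=0)
        return labels, centers

    # Toy hyperspectral image: 32x32 pixels with 8 spectral bands.
    cube = np.random.default_rng(1).random((32 * 32, 8))
    labels, centers = kmeans(cube, k=8)
    print(np.bincount(labels, minlength=8))      # pixels per category
    ```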

  15. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  16. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, the mechanism of cloud computing is advocated as a promising solution, most popularly the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework for inferring large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with the desired behaviors and that the computation time can be greatly reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
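
    As a rough serial illustration of such a hybrid optimizer (the MapReduce parallelization is omitted), the sketch below alternates a PSO velocity update with GA crossover and mutation applied to the worst half of the population; the stand-in cost function and all parameters are assumptions for illustration, not the paper's network-fitting objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cost(x):                      # stand-in for the network-vs-profile
        return np.sum((x - 1.0)**2)   # error a real inference run would use

    def ga_pso(dim=8, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimal serial hybrid GA-PSO loop (illustrative sketch)."""
        x = rng.uniform(-5, 5, (n, dim))
        v = np.zeros((n, dim))
        pbest = x.copy()
        for _ in range(iters):
            # PSO phase: move particles toward personal and global bests.
            improved = np.array([cost(a) < cost(b) for a, b in zip(x, pbest)])
            pbest[improved] = x[improved]
            gbest = pbest[np.argmin([cost(p) for p in pbest])]
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            # GA phase: rebuild the worst half from the best half.
            order = np.argsort([cost(p) for p in x])
            x, v, pbest = x[order], v[order], pbest[order]
            for i in range(n // 2, n):
                a, b = x[rng.integers(0, n // 2, 2)]
                mask = rng.random(dim) < 0.5
                x[i] = np.where(mask, a, b)                    # crossover
                x[i] += rng.normal(0, 0.1, dim) * (rng.random(dim) < 0.1)
        return gbest, cost(gbest)

    print(ga_pso())
    ```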

  17. Implementations of back propagation algorithm in ecosystems applications

    NASA Astrophysics Data System (ADS)

    Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed

    2015-05-01

    Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies, problems that have no algorithmic solution or whose algorithmic solution is too complex to be found. Abstracted from the biological brain, ANNs developed from concepts that emerged in late-twentieth-century neuro-physiological experiments on the cells of the human brain, to overcome the perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystems applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers on them a superior predictive ability. In this research, ANNs are applied to the analysis of an ecological system. The networks use the well-known Back Propagation (BP) algorithm with the delta rule for adaptation of the system. BP uses supervised learning: we provide the algorithm with examples of the inputs and outputs we want the network to compute, and the error is then calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of ANN techniques in ecological system analysis and modeling. The experimental results demonstrate that an artificial neural network system can be trained to act as an expert
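
    A minimal NumPy sketch of back propagation with the delta rule follows, trained on XOR as a stand-in data set; the architecture, seed and learning rate are illustrative assumptions, not the ecological model used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # XOR stands in for an ecological data set: inputs X, target outputs T.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output
    lr = 0.5

    for epoch in range(20000):
        # forward pass
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)
        # backward pass: deltas follow the delta rule through the sigmoids
        dY = (Y - T) * Y * (1 - Y)          # output-layer delta
        dH = (dY @ W2.T) * H * (1 - H)      # hidden-layer delta
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

    print(np.round(Y.ravel(), 2))   # typically approaches [0, 1, 1, 0]
    ```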

  18. Adaptive phase aberration correction based on imperialist competitive algorithm.

    PubMed

    Yazdani, R; Hajimahmoodzadeh, M; Fallah, H R

    2014-01-01

    We investigate numerically the feasibility of phase aberration correction in a wavefront sensorless adaptive optical system, based on the imperialist competitive algorithm (ICA). Considering a 61-element deformable mirror (DM) and the Strehl ratio as the cost function of ICA, this algorithm is employed to search the optimum surface profile of DM for correcting the phase aberrations in a solid-state laser system. The correction results show that ICA is a powerful correction algorithm for static or slowly changing phase aberrations in optical systems, such as solid-state lasers. The correction capability and the convergence speed of this algorithm are compared with those of the genetic algorithm (GA) and stochastic parallel gradient descent (SPGD) algorithm. The results indicate that these algorithms have almost the same correction capability. Also, ICA and GA are almost the same in convergence speed and SPGD is the fastest of these algorithms.

  19. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is significant for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, an FHR baseline correction algorithm was presented to make an existing baseline more accurate and better fitted to the tracings. First, the deviation of the existing FHR baseline was found and corrected; a new baseline was then obtained after smoothing. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm combining a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and they also proved the effectiveness of the FHR baseline correction algorithm.

  20. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In FBP, filtering the original projection data is a key step in avoiding artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, improved wavelet denoising combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different wavelet bases combined with three filters were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms under two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. This improved FBP algorithm therefore has potential value in medical imaging.
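
    The pipeline can be sketched with PyWavelets and scikit-image: denoise each projection with a db2 wavelet at decomposition level 2, then reconstruct with parallel-beam FBP using a Hann(ing) filter. The phantom, noise level and universal-threshold rule below are assumptions for illustration, not the paper's exact settings.

    ```python
    import numpy as np
    import pywt
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    # Build a noisy parallel-beam sinogram of the Shepp-Logan phantom.
    image = rescale(shepp_logan_phantom(), 0.25)
    angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sinogram = radon(image, theta=angles)
    noisy = sinogram + np.random.default_rng(0).normal(0, 2.0, sinogram.shape)

    def wavelet_denoise(data, wavelet="db2", level=2):
        """Soft-threshold the detail coefficients of each projection
        (each column of the sinogram is one projection)."""
        out = np.empty_like(data)
        for j in range(data.shape[1]):
            coeffs = pywt.wavedec(data[:, j], wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
            thr = sigma * np.sqrt(2 * np.log(data.shape[0]))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            out[:, j] = pywt.waverec(coeffs, wavelet)[:data.shape[0]]
        return out

    # Denoise the projections, then apply filtered back-projection.
    recon = iradon(wavelet_denoise(noisy), theta=angles, filter_name="hann")
    ```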

  1. OpenEIS Algorithms

    SciTech Connect

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  2. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their socia ’ relations or to achieve some goals. For example, we define a pair-wise force law of i epulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media . The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Pu’ ishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  3. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    Der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  4. TRIO experiment

    SciTech Connect

    Clemmer, R.G.; Finn, P.A.; Malecha, R.F.; Misra, B.; Billone, M.C.; Bowers, D.L.; Fischer, A.K.; Greenwood, L.R.; Mattas, R.F.; Tam, S.W.

    1984-09-01

    The TRIO experiment is a test of the in-situ tritium recovery and heat transfer performance of a miniaturized solid breeder blanket assembly. The assembly (capsule) was monitored for temperature and neutron flux profiles during irradiation, and a sweep gas flowed through the capsule to an analytical train wherein the amounts of tritium in its various chemical forms were determined. The capsule was designed to operate at different temperatures and sweep gas conditions. At the end of the experiment the amount of tritium retained in the solid was at a concentration of less than 0.1 wppM. More than 99.9% of the tritium generated during the experiment was successfully recovered. The results showed that the tritium inventories at the beginning and at the end of the experiment follow a relationship which appears to be characteristic of intragranular diffusion.

  5. Genetic Algorithms, Floating Point Numbers and Applications

    NASA Astrophysics Data System (ADS)

    Hardy, Yorick; Steeb, Willi-Hans; Stoop, Ruedi

    The core of most genetic algorithms is the bitwise manipulation of bit strings. We show that one can directly manipulate the bits in floating point numbers, so that the main bitwise operations in genetic algorithm mutation and crossover are performed directly inside the floating point number. Thus the interval under consideration does not need to be known in advance. As applications, we consider finding the roots of polynomials and solutions of linear equations.
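
    A minimal sketch of the idea, assuming IEEE-754 doubles reinterpreted via the struct module: mutation flips random bits of the 64-bit pattern and crossover splices two patterns at a random point. In practice flips in the sign or exponent bits can produce NaN/Inf offspring, which a real GA would have to reject or repair.

    ```python
    import struct
    import random

    def float_to_bits(x):
        """Reinterpret a 64-bit float as an integer bit pattern."""
        return struct.unpack("<Q", struct.pack("<d", x))[0]

    def bits_to_float(b):
        return struct.unpack("<d", struct.pack("<Q", b))[0]

    def mutate(x, n_flips=1):
        """Flip random bits directly inside the IEEE-754 representation."""
        b = float_to_bits(x)
        for _ in range(n_flips):
            b ^= 1 << random.randrange(64)
        return bits_to_float(b)

    def crossover(x, y):
        """Single-point crossover on the 64-bit patterns of two floats."""
        bx, by = float_to_bits(x), float_to_bits(y)
        point = random.randrange(1, 64)
        mask = (1 << point) - 1
        return bits_to_float((bx & ~mask) | (by & mask))

    random.seed(42)
    print(mutate(3.14159))      # a float 'near' pi in bit space
    print(crossover(2.0, 3.0))
    ```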

  6. DNA-based watermarks using the DNA-Crypt algorithm

    PubMed Central

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Infrequently occurring mutations may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. The three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
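
    The least-significant-base principle can be illustrated with a deliberately simplified toy codec: map each base to two bits and overwrite the low-order bit with a watermark bit. This is an assumption-laden illustration only; the actual DNA-Crypt scheme hides bits so as to preserve the encoded protein, which this toy does not attempt.

    ```python
    # Toy least-significant-base watermark (NOT the DNA-Crypt codec):
    # each base maps to 2 bits, and the low-order bit of every base is
    # overwritten with one watermark bit.
    B2I = {"A": 0, "C": 1, "G": 2, "T": 3}
    I2B = "ACGT"

    def embed(dna, bits):
        out = []
        for base, bit in zip(dna, bits):
            code = (B2I[base] & 0b10) | bit   # keep high bit, set low bit
            out.append(I2B[code])
        return "".join(out) + dna[len(bits):]

    def extract(dna, n):
        return [B2I[b] & 1 for b in dna[:n]]

    seq = "ATGGCGTACGAT"
    marked = embed(seq, [1, 0, 1, 1])
    assert extract(marked, 4) == [1, 0, 1, 1]
    print(seq, "->", marked)
    ```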

  7. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users inside various computer programmes. Nowadays there are many algorithms for solving the string searching problem, and the main criterion of their effectiveness is searching speed: the larger the shift of the pattern relative to the string on a character mismatch, the faster the algorithm runs. This article offers a combined algorithm developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms rest on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching and Boyer-Moore upon backward pattern matching. By uniting the two, the combined algorithm acquires the larger shift on a pattern/string character mismatch. The article provides an example which illustrates the work of the Boyer-Moore and Knuth-Morris-Pratt algorithms and of the combined algorithm, and shows the advantage of the latter in solving the string searching problem.
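
    One simple way to combine the two shift principles is sketched below, under the assumption that it captures the spirit (not necessarily the letter) of the article's algorithm: scan left to right, and on a mismatch advance by the larger of the KMP failure-function shift and the Boyer-Moore bad-character shift. Each shift is individually safe (no occurrence can start inside the skipped region), so their maximum is safe too.

    ```python
    def prefix_function(p):
        """KMP failure function: fail[i] = length of the longest proper
        prefix of p[:i+1] that is also a suffix of it."""
        fail = [0] * len(p)
        k = 0
        for i in range(1, len(p)):
            while k and p[i] != p[k]:
                k = fail[k - 1]
            if p[i] == p[k]:
                k += 1
            fail[i] = k
        return fail

    def combined_search(text, pat):
        """Left-to-right scan; on mismatch, shift by the max of the KMP
        shift and the Boyer-Moore bad-character shift."""
        n, m = len(text), len(pat)
        if m == 0 or m > n:
            return []
        fail = prefix_function(pat)
        last = {c: i for i, c in enumerate(pat)}   # rightmost occurrence
        hits, s = [], 0
        while s <= n - m:
            q = 0
            while q < m and text[s + q] == pat[q]:
                q += 1
            if q == m:
                hits.append(s)
                s += m - fail[m - 1]               # KMP shift after a match
            else:
                kmp_shift = q - fail[q - 1] if q > 0 else 1
                bad = last.get(text[s + q], -1)    # bad-character rule
                bc_shift = q - bad if bad < q else 1
                s += max(kmp_shift, bc_shift)
        return hits

    print(combined_search("abracadabra", "abra"))  # [0, 7]
    ```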

  8. Parallel LU-factorization algorithms for dense matrices

    SciTech Connect

    Oppe, T.C.; Kincaid, D.R.

    1987-05-01

    Several serial and parallel algorithms for computing the LU-factorization of a dense matrix are investigated. Numerical experiments and programming considerations to reduce bank conflicts on the Cray X-MP4 parallel computer are presented. Speedup factors are given for the parallel algorithms. 15 refs., 6 tabs.
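
    As a serial baseline for the algorithms discussed, a compact Doolittle LU factorization without pivoting is sketched below; the parallel variants in the report distribute these row and column updates across processors.

    ```python
    import numpy as np

    def lu(A):
        """Doolittle LU factorization without pivoting: A = L @ U with
        unit-diagonal L (serial baseline; no stability safeguards)."""
        A = A.astype(float)
        n = len(A)
        L, U = np.eye(n), np.zeros((n, n))
        for k in range(n):
            U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
            L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
        return L, U

    A = np.array([[4., 3., 2.], [6., 3., 1.], [8., 5., 9.]])
    L, U = lu(A)
    assert np.allclose(L @ U, A)
    ```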

  9. Making Sense of the Traditional Long Division Algorithm

    ERIC Educational Resources Information Center

    Lee, Ji-Eun

    2007-01-01

    This classroom scholarship report presents a group of elementary students' experiences learning the traditional long division algorithm. The traditional long division algorithm is often taught mechanically, resulting in the student's performance of step-by-step procedures with no or weak understanding of the concept. While noting some initial…

  10. Comparative Study of Two Automatic Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Grant, D.; Bethel, J.; Crawford, M.

    2013-10-01

    The Iterative Closest Point (ICP) algorithm is prevalent for the automatic fine registration of overlapping pairs of terrestrial laser scanning (TLS) data. This method, along with its vast number of variants, obtains the least squares parameters necessary to align the TLS data by minimizing some distance metric between the scans. The ICP algorithm uses a "model-data" concept in which the scans receive differential treatment in the registration process depending on whether they are assigned as the "model" or the "data". For each of the "data" points, corresponding points from the "model" are sought. Another concept, "symmetric correspondence", was proposed in the Point-to-Plane (P2P) algorithm, where both scans are treated equally in the registration process. The P2P method establishes correspondences on both scans and minimizes the point-to-plane distances between the scans while simultaneously considering the stochastic properties of both scans. This paper studies both the ICP and P2P algorithms in terms of the consistency of their registration parameters for pairs of TLS data. The question investigated is: if scan A is registered to scan B, will the parameters be the same as when scan B is registered to scan A? Experiments were conducted with eight pairs of real TLS data, which were registered by the two algorithms in the forward (scan A to scan B) and backward (scan B to scan A) modes, and the results were compared. The P2P algorithm was found to be more consistent than the ICP algorithm. The differences in registration accuracy between the forward and backward modes were negligible when using the P2P algorithm (mean difference of 0.03 mm), whereas the ICP algorithm had a mean difference of 4.26 mm. Each scan was also transformed by the forward and backward parameters of the two algorithms and the misclosure computed. The mean misclosure for the P2P algorithm was 0.80 mm, while that for the ICP algorithm was 5.39 mm. The conclusion from this study is
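
    A minimal point-to-point ICP sketch (NumPy and SciPy) conveys the asymmetric "model-data" formulation: nearest-neighbour correspondences from data to model, followed by an SVD-based rigid-transform estimate, iterated to convergence. This is an illustrative baseline only, not the paper's P2P variant or its stochastic weighting.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(data, model, iters=50):
        """Point-to-point ICP: register 'data' onto 'model'. Returns
        rotation R and translation t with R @ data_i + t ~ model."""
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(model)
        src = data.copy()
        for _ in range(iters):
            _, idx = tree.query(src)           # nearest model point per point
            tgt = model[idx]
            mu_s, mu_t = src.mean(0), tgt.mean(0)
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
            Ri = Vt.T @ U.T                    # Kabsch/Procrustes rotation
            if np.linalg.det(Ri) < 0:          # avoid reflections
                Vt[-1] *= -1
                Ri = Vt.T @ U.T
            ti = mu_t - Ri @ mu_s
            src = src @ Ri.T + ti
            R, t = Ri @ R, Ri @ t + ti         # accumulate the transform
        return R, t

    # Toy check: recover a known small rotation + translation.
    rng = np.random.default_rng(0)
    model = rng.random((200, 3))
    ang = 0.2
    Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
    data = (model - 0.1) @ Rz.T                # transformed copy of the model
    R, t = icp(data, model)
    print(np.abs((data @ R.T + t) - model).max())   # small if converged
    ```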

  11. Lightning detection and exposure algorithms for smartphones

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

    This study focuses on the key theory of lightning detection and exposure, together with experiments. First, an algorithm based on the differential operation between two adjacent frames is selected to remove the static background and extract the lightning signal, and a threshold detection algorithm is applied to detect lightning precisely. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. A look-up table can then be built from the relationship between exposure value and average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform is established around a USB 3.0 industrial camera with a CMOS imaging sensor, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can quickly and effectively capture clear lightning pictures, including difficult nighttime scenes, which will provide beneficial support to the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
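
    The detection step can be sketched as follows; the mean-plus-k-sigma threshold is an assumed heuristic standing in for the paper's threshold detection algorithm.

    ```python
    import numpy as np

    def detect_lightning(prev_frame, frame, k=6.0):
        """Frame-difference detector: subtract adjacent frames to remove
        the static background, then threshold the residual. k scales the
        threshold above the residual's own noise level (heuristic)."""
        diff = frame.astype(np.int32) - prev_frame.astype(np.int32)
        residual = np.abs(diff)
        thr = residual.mean() + k * residual.std()
        mask = residual > thr
        return mask.any(), mask    # (lightning present?, pixels that fired)

    # Toy frames: static scene, then the same scene with a bright streak.
    rng = np.random.default_rng(0)
    scene = rng.integers(0, 30, (120, 160), dtype=np.uint8)
    flash = scene.copy()
    flash[40:42, :] = 255          # simulated lightning channel
    hit, mask = detect_lightning(scene, flash)
    print(hit, int(mask.sum()))    # True, ~320 pixels
    ```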

  12. Autonomous learning based on cost assumptions: theoretical studies and experiments in robot control.

    PubMed

    Ribeiro, C H; Hemerly, E M

    1999-06-01

    Autonomous learning techniques are based on experience acquisition. In most realistic applications, experience is time-consuming: it implies sensor reading, actuator control and algorithmic update, constrained by the learning system dynamics. The crudeness of the information upon which classical learning algorithms operate makes such problems too difficult and unrealistic. Nonetheless, additional information for facilitating the learning process should ideally be embedded in such a way that the structural, well-studied characteristics of these fundamental algorithms are maintained. We investigate in this article a more general formulation of the Q-learning method that allows information derived from single updates to be spread towards a neighbourhood of the instantly visited state and that converges to optimality. We show how this new formulation can be used as a mechanism to safely embed prior knowledge about the structure of the state space, and demonstrate it in a modified implementation of a reinforcement learning algorithm in a real robot navigation task.
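
    A toy version of the spreading idea on a one-dimensional corridor task is sketched below; the spreading kernel, the optimistic initialization and all parameters are assumptions for illustration, not the authors' formulation.

    ```python
    import numpy as np

    # Q-learning on a 1-D corridor with each TD update spread to the
    # neighbouring states (a toy stand-in for the paper's mechanism).
    N, GOAL = 20, 19
    ACTIONS = (-1, +1)                      # move left / move right
    Q = np.ones((N, 2))                     # optimistic init drives exploration
    alpha, gamma, eps = 0.2, 0.95, 0.1
    spread = {0: 1.0, -1: 0.5, +1: 0.5}     # update weight at state s + offset

    rng = np.random.default_rng(0)
    for episode in range(300):
        s = 0
        while s != GOAL:
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            s2 = min(max(s + ACTIONS[a], 0), N - 1)
            r = 1.0 if s2 == GOAL else 0.0
            td = r + gamma * Q[s2].max() - Q[s, a]
            for off, w in spread.items():   # spread the TD update around s
                if 0 <= s + off < N:
                    Q[s + off, a] += alpha * w * td
            s = s2

    print(Q[:GOAL].argmax(axis=1))          # mostly 1s: the policy moves right
    ```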

  14. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.

  15. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally-symmetric state of an n-qubit system, the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2, and we provide a complete characterization of such paths.

  16. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    PubMed

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.

  17. The Physics of Equestrian Show Jumping

    ERIC Educational Resources Information Center

    Stinner, Art

    2014-01-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this…

  18. Serving Up Activities for TV Cooking Shows.

    ERIC Educational Resources Information Center

    Katchen, Johanna E.

    This paper documents a presentation given on the use of English-language television cooking shows in English-as-a-Second-Language (ESL) and English-as-a-Foreign-Language (EFL) classrooms in Taiwan. Such shows can be ideal for classroom use, since they have a predictable structure consisting of short segments, are of interest to most students,…

  19. 47 CFR 90.505 - Showing required.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MOBILE RADIO SERVICES Developmental Operation § 90.505 Showing required. (a) Except as provided in paragraph (b) of this section, each application for developmental operation shall be accompanied by a showing that: (1) The applicant has an organized plan of development leading to a specific objective;...

  20. The Language of Show Biz: A Dictionary.

    ERIC Educational Resources Information Center

    Sergel, Sherman Louis, Ed.

    This dictionary of the language of show biz provides the layman with definitions and essays on terms and expressions often used in show business. The overall pattern of selection was intended to be more rather than less inclusive, though radio, television, and film terms were deliberately omitted. Lengthy explanations are sometimes used to express…