Sample records for choice translating algorithm

  1. Research prioritization through prediction of future impact on biomedical science: a position paper on inference-analytics.

    PubMed

    Ganapathiraju, Madhavi K; Orii, Naoki

    2013-08-30

    Advances in biotechnology have created "big-data" situations in molecular and cellular biology. Several sophisticated algorithms have been developed that process big data to generate hundreds of biomedical hypotheses (or predictions). The bottleneck to translating this large number of biological hypotheses is that each of them needs to be studied by experimentation for interpreting its functional significance. Even when the predictions are estimated to be very accurate, from a biologist's perspective, the choice of which of these predictions is to be studied further is made based on factors such as the availability of reagents and resources and the possibility of formulating some reasonable hypothesis about its biological relevance. When viewed from a global perspective, say from that of a federal funding agency, ideally the choice of which prediction should be studied would be made based on which of them can make the most translational impact. We propose that algorithms be developed to identify which of the computationally generated hypotheses have potential for high translational impact; this way, funding agencies and the scientific community can invest resources and drive the research based on a global view of biomedical impact without being deterred by a local view of feasibility. In short, data-analytic algorithms analyze big data and generate hypotheses; in contrast, the proposed inference-analytic algorithms analyze these hypotheses and rank them by predicted biological impact. We demonstrate this through the development of an algorithm to predict the biomedical impact of protein-protein interactions (PPIs), where impact is estimated by the number of future publications that cite the paper that originally reported the PPI. This position paper describes a new computational problem that is relevant in the era of big data and discusses the challenges that exist in studying this problem, highlighting the need for the scientific community to engage in this line of research. The proposed class of algorithms, namely inference-analytic algorithms, is necessary to ensure that resources are invested in translating those computational outcomes that promise maximum biological impact. Application of this concept to predict the biomedical impact of PPIs illustrates not only the concept, but also the challenges in designing these algorithms.

  2. The effects of variations in parameters and algorithm choices on calculated radiomics feature values: initial investigations and comparisons to feature variability across CT image acquisition conditions

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael

    2018-02-01

    Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained via understanding and overcoming the limitations in current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules, such as 1) different choices of parameters and algorithms in feature calculation, 2) two CT image dose levels, and 3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on entropy texture features of lung nodules. CT images of 19 lung nodules from our lung cancer screening program were identified by a CAD tool, which also provided contours. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. Results indicate the need for harmonization of feature calculations and identification of optimal parameters and algorithms in a radiomics study.
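
    A minimal numpy sketch of the kind of parameter dependence the study measures: a gray-level co-occurrence matrix (GLCM) entropy computed for several quantization levels. The offset and level choices are illustrative assumptions, not the authors' exact feature settings.

    ```python
    import numpy as np

    def glcm_entropy(img, dx=1, dy=0, levels=8):
        """Shannon entropy of a gray-level co-occurrence matrix (GLCM)."""
        # Quantize the image to a small number of gray levels.
        q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
        q = np.clip(q, 0, levels - 1)
        # Accumulate co-occurrence counts for the chosen pixel offset.
        glcm = np.zeros((levels, levels))
        h, w = q.shape
        for y in range(h - dy):
            for x in range(w - dx):
                glcm[q[y, x], q[y + dy, x + dx]] += 1
        p = glcm / glcm.sum()           # normalize to a joint probability
        p = p[p > 0]                    # avoid log(0)
        return -np.sum(p * np.log2(p))  # Shannon entropy in bits

    # Different quantization choices give different feature values,
    # which is exactly the kind of variability the study quantifies.
    rng = np.random.default_rng(0)
    nodule = rng.integers(0, 256, size=(32, 32))
    for levels in (8, 16, 32):
        print(levels, round(glcm_entropy(nodule, levels=levels), 3))
    ```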

  3. Staging "Swissness": Inter- and Intracultural Theatre Translation

    ERIC Educational Resources Information Center

    Wilkinson, Jane

    2005-01-01

    This paper examines the choice to translate plays from "Hochdeutsch" (the standard form of the German language) into local dialect in German-speaking Switzerland. It first looks at the creative process of translating for the amateur stage and then at the reasons behind the choice to translate. It argues that this choice reflects a desire…

  4. Translation Ambiguity in and out of Context

    ERIC Educational Resources Information Center

    Prior, Anat; Wintner, Shuly; MacWhinney, Brian; Lavie, Alon

    2011-01-01

    We compare translations of single words, made by bilingual speakers in a laboratory setting, with contextualized translation choices of the same items, made by professional translators and extracted from parallel language corpora. The translation choices in both cases show moderate convergence, demonstrating that decontextualized translation…

  5. Control Allocation with Load Balancing

    NASA Technical Reports Server (NTRS)

    Bodson, Marc; Frost, Susan A.

    2009-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. The solution using the l(infinity) norm also results in better robustness to failures and lower sensitivity to nonlinearities in illustrative examples.
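
    The conversion to a linear program is standard and easy to sketch: minimize an auxiliary bound t on all actuator deflections subject to the moment equation. The snippet below solves it with scipy's linprog, whose HiGHS backend stands in for the paper's simplex solver; the effectiveness matrix B and command d are made-up values.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Minimize the maximum actuator deflection t subject to B @ u = d:
    #   min t   s.t.   -t <= u_i <= t  for all i,   B u = d.
    # Decision variables are x = [u_1 .. u_n, t].
    B = np.array([[1.0, 0.5, -0.3, 0.2],     # hypothetical effectiveness matrix
                  [0.0, 1.0,  0.8, -0.4]])
    d = np.array([0.6, 0.3])                 # desired moments (made-up numbers)
    n = B.shape[1]

    c = np.zeros(n + 1); c[-1] = 1.0         # objective: minimize t
    # u_i - t <= 0 and -u_i - t <= 0 together encode |u_i| <= t.
    A_ub = np.vstack([np.hstack([np.eye(n), -np.ones((n, 1))]),
                      np.hstack([-np.eye(n), -np.ones((n, 1))])])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([B, np.zeros((B.shape[0], 1))])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
                  bounds=[(None, None)] * n + [(0, None)], method="highs")
    u, t = res.x[:n], res.x[-1]
    print("deflections:", np.round(u, 3), "max |u|:", round(t, 3))
    ```

    The "load balancing" effect is visible in the output: several actuators sit exactly at the common bound t, sharing the command among them.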

  6. The nondeterministic divide

    NASA Technical Reports Server (NTRS)

    Charlesworth, Arthur

    1990-01-01

    The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
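
    A minimal sketch of a diva-style computation, with a random choice standing in for nondeterminism: the division point is chosen before any work is done, components are read only when a slice has unit length, and the result is independent of the splits precisely because the combining operator (+) is associative.

    ```python
    import random

    def diva_sum(v, lo=0, hi=None):
        """Divide-and-conquer reduction with a nondeterministic divide."""
        if hi is None:
            hi = len(v)
        if hi - lo == 1:
            return v[lo]                      # unit-length slice: access allowed
        mid = random.randrange(lo + 1, hi)    # nondeterministic division point
        return diva_sum(v, lo, mid) + diva_sum(v, mid, hi)

    data = list(range(1, 11))
    # Every run picks different split points, yet the answer is always 55.
    print({diva_sum(data) for _ in range(20)})
    ```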

  7. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.

  8. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
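
    A sketch of the core idea under simplifying assumptions: a single-level Haar detail band (computed by hand rather than with a wavelet library) supplies edge-like, high-frequency features, and FFT-based cross-correlation of those features recovers a translation at reduced resolution and cost. This toy handles even circular shifts only.

    ```python
    import numpy as np

    def haar_detail(img):
        """One level of a 2-D Haar transform; keep the diagonal detail band."""
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        return (a - b - c + d) / 4.0        # high-frequency, edge-like features

    def estimate_shift(ref, moved):
        """Translation estimate from FFT cross-correlation of detail bands."""
        f, g = haar_detail(ref), haar_detail(moved)
        corr = np.fft.ifft2(np.conj(np.fft.fft2(f)) * np.fft.fft2(g)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = corr.shape                   # unwrap the circular correlation
        if dy > h // 2: dy -= h
        if dx > w // 2: dx -= w
        return 2 * dy, 2 * dx               # undo the factor-2 subsampling

    rng = np.random.default_rng(1)
    ref = rng.random((128, 128))
    moved = np.roll(ref, shift=(6, -10), axis=(0, 1))
    print(estimate_shift(ref, moved))       # expect (6, -10)
    ```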

  9. Two Different Faces of Cavafy in English: A Corpus-Assisted Approach to Translational Stylistics

    ERIC Educational Resources Information Center

    Pantopoulos, Iraklis

    2012-01-01

    A translator is seen to leave a personal mark on the text through their stylistic choices and the patterns formed by these choices. This article comprises a case study that uses a specialized comparative corpus containing translations of C.P. Cavafy's canon in order to explore the distinctive stylistic features of Rae Dalven and of Edmund…

  10. Choice-Making in Rendering Culture-Bound Elements in Literary Translation: A Case Study on the English Translation Of «????»

    ERIC Educational Resources Information Center

    Meihua, Song

    2014-01-01

    How to render culture-bound elements into a foreign language remains one of the most challenging tasks for all translators, especially, when the source text is a literary one. To retain the aesthetic effects and other stylistic features of importance, some argue that choice can be made from either domestication or foreignization with…

  11. Rapid motif compliance scoring with match weight sets.

    PubMed

    Venezia, D; O'Hara, P J

    1993-02-01

    Most current implementations of motif matching in biological sequences have sacrificed the generality of weight matrix scoring for shorter runtimes. The program MOTIF incorporates a weight matrix and a rapid, backtracking tree-search algorithm to score motif compliance with greatly enhanced performance while placing no constraints on the motif. In addition, any positions within a motif can be marked as 'inviolate', thereby requiring an exact match. MOTIF allows a choice of regular expression formats and can use both motif and sequence libraries as either targets or queries. Nucleic acid sequences can optionally be translated by MOTIF in any frame(s) and used against peptide motifs.
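
    A sketch of weight-matrix scoring with a pruned, backtracking-style search: a branch is abandoned as soon as even a perfect completion of the remaining positions cannot reach the threshold. The matrix, threshold, and pruning details are illustrative assumptions, not MOTIF's actual implementation.

    ```python
    # Position weight matrix for a toy 4-position DNA motif: PWM[pos][base].
    PWM = [{"A": 2, "C": -1, "G": -1, "T": 0},
           {"A": -2, "C": 3, "G": -2, "T": -2},   # near-'inviolate' position
           {"A": 1, "C": 0, "G": 1, "T": -1},
           {"A": 0, "C": 0, "G": 2, "T": -1}]
    # Best score still obtainable from position i onward (for pruning).
    BEST_SUFFIX = [0.0] * (len(PWM) + 1)
    for i in range(len(PWM) - 1, -1, -1):
        BEST_SUFFIX[i] = BEST_SUFFIX[i + 1] + max(PWM[i].values())

    def matches(window, threshold):
        """Backtracking scorer: abandon a branch as soon as even a perfect
        completion of the remaining positions cannot reach the threshold."""
        def walk(i, score):
            if score + BEST_SUFFIX[i] < threshold:
                return False                      # prune this branch
            if i == len(PWM):
                return score >= threshold
            return walk(i + 1, score + PWM[i][window[i]])
        return walk(0, 0.0)

    seq = "TTACGGACAGTACGT"
    hits = [(i, seq[i:i + 4]) for i in range(len(seq) - 3)
            if matches(seq[i:i + 4], threshold=5)]
    print(hits)
    ```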

  12. Validation of the translation of an instrument to measure reliability of written information on treatment choices: a study on attention deficit/hyperactivity disorder (ADHD).

    PubMed

    Montoya, A; Llopis, N; Gilaberte, I

    2011-12-01

    DISCERN is an instrument designed to help patients assess the reliability of written information on treatment choices. Originally created in English, there is no validated Spanish version of this instrument. This study seeks to validate the Spanish translation of the DISCERN instrument used as a primary measure on a multicenter study aimed to assess the reliability of web-based information on treatment choices for attention deficit/hyperactivity disorder (ADHD). We used a modified version of a method for validating translated instruments in which the original source-language version is formally compared with the back-translated source-language version. Each item was ranked in terms of comparability of language, similarity of interpretability, and degree of understandability. Responses used Likert scales ranging from 1 to 7, where 1 indicates the best interpretability, language and understandability, and 7 indicates the worst. Assessments were performed by 20 raters fluent in the source language. The Spanish translation of DISCERN, based on ratings of comparability, interpretability and degree of understandability (mean score (SD): 1.8 (1.1), 1.4 (0.9) and 1.6 (1.1), respectively), was considered extremely comparable. All items received a score of less than three, therefore no further revision of the translation was needed. The validation process showed that the quality of DISCERN translation was high, validating the comparable language of the tool translated on assessing written information on treatment choices for ADHD.

  13. Optimizing doped libraries by using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Tomandl, Dirk; Schober, Andreas; Schwienhorst, Andreas

    1997-01-01

    The insertion of random sequences into protein-encoding genes in combination with biological selection techniques has become a valuable tool in the design of molecules that have useful and possibly novel properties. By employing highly effective screening protocols, a functional and unique structure that had not been anticipated can be distinguished among a huge collection of inactive molecules that together represent all possible amino acid combinations. This technique is severely limited by its restriction to a library of manageable size. One approach for limiting the size of a mutant library relies on 'doping schemes', where subsets of amino acids are generated that reveal only certain combinations of amino acids in a protein sequence. Three mononucleotide mixtures for each codon concerned must be designed, such that the resulting codons that are assembled during chemical gene synthesis represent the desired amino acid mixture on the level of the translated protein. In this paper we present a doping algorithm that 'reverse translates' a desired mixture of certain amino acids into three mixtures of mononucleotides. The algorithm is designed to optimally bias these mixtures towards the codons of choice. This approach combines a genetic algorithm with local optimization strategies based on the downhill simplex method. Disparate relative representations of all amino acids (and stop codons) within a target set can be generated. Optional weighting factors are employed to emphasize the frequencies of certain amino acids and their codon usage, and to compensate for reaction rates of different mononucleotide building blocks (synthons) during chemical DNA synthesis. The effect of statistical errors that accompany an experimental realization of calculated nucleotide mixtures on the generated mixtures of amino acids is simulated. These simulations show that the robustness of different optima with respect to small deviations from calculated values depends on their concomitant fitness. Furthermore, the calculations probe the fitness landscape locally and allow a preliminary assessment of its structure.
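
    The inverse mapping itself is easy to state: three base mixtures induce a codon distribution, which the genetic code collapses to an amino-acid distribution. The sketch below fits the mixtures to a target by plain stochastic hill climbing, standing in for the paper's genetic algorithm plus downhill simplex; the 50/50 Ala/Gly target is a made-up example, and synthesis-rate weighting is omitted.

    ```python
    import random

    BASES = "TCAG"
    AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    # Standard genetic code: codon (TCAG order per position) -> amino acid.
    CODE = {a + b + c: AA[16 * i + 4 * j + k]
            for i, a in enumerate(BASES) for j, b in enumerate(BASES)
            for k, c in enumerate(BASES)}

    def aa_distribution(mixes):
        """Amino-acid (and stop, '*') distribution induced by three base mixes."""
        dist = {}
        for codon, aa in CODE.items():
            p = mixes[0][codon[0]] * mixes[1][codon[1]] * mixes[2][codon[2]]
            dist[aa] = dist.get(aa, 0.0) + p
        return dist

    def error(mixes, target):
        dist = aa_distribution(mixes)
        keys = set(dist) | set(target)
        return sum((dist.get(k, 0.0) - target.get(k, 0.0)) ** 2 for k in keys)

    def optimize(target, steps=20000, seed=0):
        rng = random.Random(seed)
        mixes = [{b: 0.25 for b in BASES} for _ in range(3)]
        best = error(mixes, target)
        for _ in range(steps):                 # simple stochastic hill climb
            pos, base = rng.randrange(3), rng.choice(BASES)
            trial = [dict(m) for m in mixes]
            trial[pos][base] = max(0.0, trial[pos][base] + rng.uniform(-0.05, 0.05))
            s = sum(trial[pos].values())
            trial[pos] = {b: v / s for b, v in trial[pos].items()}  # renormalize
            e = error(trial, target)
            if e < best:
                mixes, best = trial, e
        return mixes, best

    # Target: a 50/50 alanine/glycine doping at this codon position.
    mixes, err = optimize({"A": 0.5, "G": 0.5})
    for i, m in enumerate(mixes, 1):
        print(f"position {i}:", {b: round(v, 2) for b, v in m.items()})
    print("squared error:", round(err, 4))
    ```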

  14. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm is offered for the automated choice of acupuncture points for EHF (extremely high frequency) therapy. The point prescription formed by the automated choice algorithm is recommendational in character. Clinical investigations showed that applying the developed algorithm in EHF-therapy makes it possible to normalize the energetic state of the meridians and to solve many problems of organism functioning effectively.

  15. [Translation and cultural adaptation of the questionnaire on the reason for food choices (Food Choice Questionnaire - FCQ) into Portuguese].

    PubMed

    Heitor, Sara Franco Diniz; Estima, Camilla Chermont Prochnik; das Neves, Fabricia Junqueira; de Aguiar, Aline Silva; Castro, Sybelle de Souza; Ferreira, Julia Elba de Souza

    2015-08-01

    The Food Choice Questionnaire (FCQ) assesses the importance that subjects attribute to nine factors related to food choices: health, mood, convenience, sensory appeal, natural content, price, weight control, familiarity and ethical concern. This study sought to assess the applicability of the FCQ in Brazil; it describes the translation and cultural adaptation from English into Portuguese of the FCQ via the following steps: independent translations, consensus, back-translation, evaluation by a committee of experts, semantic validation and pre-test. The pre-test was run with a randomly sampled group of 86 male and female college students from different courses with a median age of 19. Slight differences between the versions were observed and adjustments were made. After minor changes in the translation process, the committee of experts considered that the Brazilian Portuguese version was semantically and conceptually equivalent to the English original. Semantic validation showed that the questionnaire is easily understood. The instrument presented a high degree of internal consistency. The study is the first stage in the process of validating an instrument, which consists of face and content validity. Further stages, already underway, are needed before other researchers can use it.

  16. A fast Fourier transform on multipoles (FFTM) algorithm for solving Helmholtz equation in acoustics analysis.

    PubMed

    Ong, Eng Teo; Lee, Heow Pueh; Lim, Kian Meng

    2004-09-01

    This article presents a fast algorithm for the efficient solution of the Helmholtz equation. The method is based on the translation theory of the multipole expansions. Here, the speedup comes from the convolution nature of the translation operators, which can be evaluated rapidly using fast Fourier transform algorithms. Also, the computations of the translation operators are accelerated by using the recursive formulas developed recently by Gumerov and Duraiswami [SIAM J. Sci. Comput. 25, 1344-1381 (2003)]. It is demonstrated that the algorithm can produce good accuracy with a relatively low order of expansion. Efficiency analyses of the algorithm reveal that it has computational complexity of O(N^a), where a ranges from 1.05 to 1.24. However, this method requires substantially more memory to store the translation operators as compared to the fast multipole method. Hence, despite its simplicity in implementation, this memory requirement issue may limit the application of this algorithm to solving very large-scale problems.
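
    The source of the speedup is easy to demonstrate in miniature: applying a translation operator is a circular convolution, which the FFT evaluates in O(N log N) instead of O(N^2). The kernel below is a stand-in, not an actual Helmholtz multipole translation operator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 512
    coeffs = rng.standard_normal(N)      # stand-in for multipole coefficients
    kernel = rng.standard_normal(N)      # stand-in for a translation operator

    # Direct O(N^2) circular convolution:
    #   direct[n] = sum_m coeffs[m] * kernel[(n - m) % N]
    idx = (np.arange(N)[:, None] - np.arange(N)[None, :]) % N
    direct = (kernel[idx] * coeffs[None, :]).sum(axis=1)

    # The same operation via the convolution theorem, in O(N log N).
    fast = np.fft.ifft(np.fft.fft(coeffs) * np.fft.fft(kernel)).real

    print(np.allclose(direct, fast))     # True: identical results, far cheaper
    ```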

  17. Ideological Manipulations: The Persian Translation of "The Gadfly"

    ERIC Educational Resources Information Center

    Khadem-Nabi, Mir Mohammad

    2014-01-01

    This paper discusses the lexical choices made by the translator of a novel. The novel, "The Gadfly," has a political significance for the pre-revolutionary Iran. Lexical choices were discussed in light of the methodology provided by Leuven-Zwart who introduces three taxonomies of modulation, modification and mutation for translation…

  18. A set partitioning reformulation for the multiple-choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Voß, Stefan; Lalla-Ruiz, Eduardo

    2016-05-01

    The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
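
    For concreteness, a brute-force reference implementation of the MMKP on a toy instance (pick exactly one item per group, maximize profit, respect every resource dimension). The instance data are made up, and real solvers replace the exhaustive search with exact or heuristic methods.

    ```python
    from itertools import product

    groups = [  # each item is (profit, (resource use in 2 dimensions))
        [(10, (3, 2)), (6, (1, 1))],
        [(8,  (2, 4)), (7, (2, 1)), (3, (1, 1))],
        [(9,  (4, 2)), (5, (2, 2))],
    ]
    capacity = (7, 6)

    best = None
    for choice in product(*groups):                      # one item per group
        use = tuple(sum(item[1][d] for item in choice)
                    for d in range(len(capacity)))
        if all(u <= c for u, c in zip(use, capacity)):   # multidimensional check
            profit = sum(item[0] for item in choice)
            if best is None or profit > best[0]:
                best = (profit, choice, use)

    print(best)   # best profit, chosen items, total resource use
    ```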

  19. Development of a translational model to screen medications for cocaine use disorder I: Choice between cocaine and food in rhesus monkeys.

    PubMed

    Johnson, Amy R; Banks, Matthew L; Blough, Bruce E; Lile, Joshua A; Nicholson, Katherine L; Negus, S Stevens

    2016-08-01

    Homologous cocaine self-administration procedures in laboratory animals and humans may facilitate translational research for medications development to treat cocaine dependence. This study, therefore, sought to establish choice between cocaine and an alternative reinforcer in rhesus monkeys responding under a procedure back-translated from previous human studies and homologous to a human laboratory procedure described in a companion paper. Four rhesus monkeys with chronic indwelling intravenous catheters had access to cocaine injections (0, 0.043, 0.14, or 0.43 mg/kg/injection) and food (0, 1, 3, or 10 one-gram banana-flavored food pellets). During daily 5-h sessions, a single cocaine dose and a single food-reinforcer magnitude were available in ten 30-min trials. During the initial "sample" trial, the available cocaine and food reinforcer were delivered non-contingently. During each of the subsequent nine "choice" trials, responding could produce either the cocaine or food reinforcer under an independent concurrent progressive-ratio schedule. Preference was governed by the cocaine dose and food-reinforcer magnitude, and increasing cocaine doses produced dose-dependent increases in cocaine choice at all food-reinforcer magnitudes. Effects of the candidate medication lisdexamfetamine (0.32-3.2 mg/kg/day) were then examined on choice between 0.14 mg/kg/injection cocaine and 10 pellets. Under baseline conditions, this reinforcer pair maintained an average of approximately 6 cocaine and 3 food choices. Lisdexamfetamine dose-dependently decreased cocaine choice in all monkeys, but food choice was not significantly altered. These results support the utility of this procedure in rhesus monkeys as one component of a platform for translational research on medications development to treat cocaine use disorder.

  20. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    PubMed

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-07

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three dimensional coordinates of three fiducial markers inside the prostate were calculated. The three dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error was improved for real-time calculation of tumor displacement from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm by adding real-time rotation and translation displacement with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real-time measurement and adaptation to tumor rotation.
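
    With three matched fiducial markers the correspondences are known, so the rotation and translation that ICP iterates toward can be computed in closed form by a single SVD (Kabsch/Procrustes) step. The sketch below shows that step on synthetic marker positions; the full ICP re-matching loop is omitted.

    ```python
    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares rotation R and translation t with Q ≈ R @ P + t."""
        cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
        H = (Q - cQ) @ (P - cP).T              # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard vs. reflection
        R = U @ D @ Vt
        t = cQ - R @ cP
        return R, t

    # Three non-collinear markers (columns), a small rotation, a translation.
    P = np.array([[0.0, 10.0, 0.0],
                  [0.0, 0.0, 10.0],
                  [5.0, 0.0, 0.0]]).T
    angle = np.deg2rad(3.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = R_true @ P + np.array([[1.0], [0.5], [-0.2]])
    R, t = rigid_fit(P, Q)
    print(np.allclose(R, R_true), t.ravel())   # True [ 1.   0.5 -0.2]
    ```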

  2. Choice as a Global Language in Local Practice: A Mixed Model of School Choice in Taiwan

    ERIC Educational Resources Information Center

    Mao, Chin-Ju

    2015-01-01

    This paper uses school choice policy as an example to demonstrate how local actors adopt, mediate, translate, and reformulate "choice" as neo-liberal rhetoric informing education reform. Complex processes exist between global policy about school choice and the local practice of school choice. Based on the theoretical sensibility of…

  3. Inverse Translation in China: A Necessary Choice or a Necessary Evil

    ERIC Educational Resources Information Center

    Shi, Jiasheng

    2013-01-01

    Inverse translation has long been seen in a negative light in modern translation studies, and has thus been relegated to a sort of second-class endeavour. Based on a brief comparative study of English translations of Wenxin Diaolong, a Chinese literary classic, this paper argues that inverse translation is as legitimate and feasible as direct…

  4. Improved ALE mesh velocities for complex flows

    DOE PAGES

    Bakosi, Jozsef; Waltz, Jacob I.; Morgan, Nathaniel Ray

    2017-05-31

    A key choice in the development of arbitrary Lagrangian-Eulerian solution algorithms is how to move the computational mesh. The most common approaches are smoothing and relaxation techniques, or to compute a mesh velocity field that produces smooth mesh displacements. We present a method in which the mesh velocity is specified by the irrotational component of the fluid velocity as computed from a Helmholtz decomposition, and excess compression of mesh cells is treated through a noniterative, local spring-force model. This approach allows distinct and separate control over rotational and translational modes. In conclusion, the utility of the new mesh motion algorithm is demonstrated on a number of 3D test problems, including problems that involve both shocks and significant amounts of vorticity.
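
    A minimal spectral sketch of the decomposition the paper relies on, assuming a periodic 2-D field so the FFT applies (the paper's setting is an unstructured mesh): each Fourier mode of the velocity is projected onto its wavevector, leaving the irrotational component.

    ```python
    import numpy as np

    n = 64
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    vx = np.cos(X) + np.sin(Y)     # mixed field with both curl and divergence
    vy = np.sin(X) * np.cos(Y)

    kx = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                               # avoid 0/0 for the mean mode

    Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
    dot = (KX * Vx + KY * Vy) / k2               # (k . v_hat) / |k|^2
    ux = np.fft.ifft2(KX * dot).real             # irrotational component
    uy = np.fft.ifft2(KY * dot).real

    # Its curl should vanish up to roundoff: d(uy)/dx - d(ux)/dy = 0.
    curl = np.fft.ifft2(1j * (KX * np.fft.fft2(uy)
                              - KY * np.fft.fft2(ux))).real
    print(np.abs(curl).max())
    ```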

  5. Comparing the incomparable? A systematic review of competing techniques for converting descriptive measures of health status into QALY-weights.

    PubMed

    Mortimer, Duncan; Segal, Leonie

    2008-01-01

    Algorithms for converting descriptive measures of health status into quality-adjusted life year (QALY) weights are now widely available, and their application in economic evaluation is increasingly commonplace. The objective of this study is to describe and compare existing conversion algorithms and to highlight issues bearing on the derivation and interpretation of the QALY-weights so obtained. Systematic review of algorithms for converting descriptive measures of health status into QALY-weights. The review identified a substantial body of literature comprising 46 derivation studies and 16 studies that provided evidence or commentary on the validity of conversion algorithms. Conversion algorithms were derived using 1 of 4 techniques: 1) transfer to utility regression, 2) response mapping, 3) effect size translation, and 4) "revaluing" outcome measures using preference-based scaling techniques. Although these techniques differ in their methodological/theoretical tradition, data requirements, and ease of derivation and application, the available evidence suggests that the sensitivity and validity of derived QALY-weights may be more dependent on the coverage and sensitivity of measures and the disease area/patient group under evaluation than on the technique used in derivation. Despite the recent proliferation of conversion algorithms, a number of questions bearing on the derivation and interpretation of derived QALY-weights remain unresolved. These unresolved issues suggest directions for future research in this area. In the meantime, analysts seeking guidance in selecting derived QALY-weights should consider the validity and feasibility of each conversion algorithm in the disease area and patient group under evaluation rather than restricting their choice to weights from a particular derivation technique.

  6. Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Lomuscio, Alessio

    2004-01-01

    We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for boolean formulae.

  7. Translational bioinformatics: linking the molecular world to the clinical world.

    PubMed

    Altman, R B

    2012-06-01

    Translational bioinformatics represents the union of translational medicine and bioinformatics. Translational medicine moves basic biological discoveries from the research bench into the patient-care setting and uses clinical observations to inform basic biology. It focuses on patient care, including the creation of new diagnostics, prognostics, prevention strategies, and therapies based on biological discoveries. Bioinformatics involves algorithms to represent, store, and analyze basic biological data, including DNA sequence, RNA expression, and protein and small-molecule abundance within cells. Translational bioinformatics spans these two fields; it involves the development of algorithms to analyze basic molecular and cellular data with an explicit goal of affecting clinical care.

  8. International Quidditch: Using Cultural Translation Exercises to Teach Word Choice and Audience

    ERIC Educational Resources Information Center

    Ruwe, Donelle

    2013-01-01

    The American edition of "Harry Potter and the Sorcerer's Stone" has significant changes from the original British version, and every word of a Harry Potter book in translation derives from a translator's decision-making process. Focusing students on British-to-American cultural translation problems in the Harry Potter series encourages…

  9. More Heads Are Better than One: Peer Editing in a Translation Classroom of EFL Learners

    ERIC Educational Resources Information Center

    Insai, Sakolkarn; Poonlarp, Tongtip

    2017-01-01

    During the process of translation, students need to learn how to detect and correct errors in their translation drafts, and collaboration among themselves is one possible way to do this. As Pym (2003) has explained, translation is a process of problem-solving; translators must be able to decide which choices are more or less appropriate for the…

  10. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amarasinghe, Saman

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
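
    A toy illustration of algorithmic choice as a tunable parameter, in the spirit of the project but not using the actual ZettaBricks or OpenTuner APIs: a sort routine exposes an algorithm choice plus a cutoff, and a random-search tuner times configurations and keeps the fastest.

    ```python
    import random
    import time

    def hybrid_sort(xs, algo, cutoff):
        """A 'program' with an algorithmic choice (merge vs. builtin) and a
        tunable cutoff below which insertion sort is used."""
        if len(xs) <= cutoff:
            out = []
            for v in xs:                      # insertion sort for small inputs
                i = len(out)
                while i > 0 and out[i - 1] > v:
                    i -= 1
                out.insert(i, v)
            return out
        if algo == "merge":
            mid = len(xs) // 2
            return merge(hybrid_sort(xs[:mid], algo, cutoff),
                         hybrid_sort(xs[mid:], algo, cutoff))
        return sorted(xs)                     # the "builtin" algorithmic choice

    def merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]: out.append(a[i]); i += 1
            else:            out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    data = [random.random() for _ in range(5000)]
    best = None
    for _ in range(20):                       # random-search autotuning loop
        cfg = (random.choice(["merge", "builtin"]),
               random.choice([4, 16, 64, 256]))
        t0 = time.perf_counter()
        result = hybrid_sort(data, *cfg)
        cost = time.perf_counter() - t0
        assert result == sorted(data)         # correctness is non-negotiable
        if best is None or cost < best[0]:
            best = (cost, cfg)
    print("fastest configuration found:", best[1])
    ```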

  11. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  12. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
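
    Activity selection, one of the paper's examples, with the greedy step annotated by the dominance relation that justifies it: among the remaining compatible activities, the earliest-finishing one dominates, since any schedule can be rewritten, no worse, to start with it. The annotation is informal; the paper derives such relations formally within a synthesis framework.

    ```python
    def select_activities(intervals):
        """Maximum set of pairwise-compatible (start, finish) intervals."""
        chosen, free_from = [], float("-inf")
        # Sorting by finish time makes the dominant candidate appear first.
        for start, finish in sorted(intervals, key=lambda iv: iv[1]):
            if start >= free_from:        # compatible with what we have
                chosen.append((start, finish))
                free_from = finish        # dominance: earliest finish leaves
                                          # the largest residual problem
        return chosen

    acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
    print(select_activities(acts))        # [(1, 4), (5, 7), (8, 11)]
    ```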

  13. Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 2. Users manual

    NASA Astrophysics Data System (ADS)

    Medgyesi-Mitschang, L. N.; Putnam, J. M.

    1982-05-01

    A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation from off-surface and aperture antennas on finite-length open or closed bodies of arbitrary cross section. The near fields and antenna coupling on such bodies are computed. The theoretical development underlying the algorithm is described in Volume 1 of this report.

  14. Factors Influencing Teaching Choice in Turkey

    ERIC Educational Resources Information Center

    Kilinc, Ahmet; Watt, Helen M. G.; Richardson, Paul W.

    2012-01-01

    Why choose to become a teacher in Turkey? The authors examined motivations and perceptions among preservice teachers (N = 1577) encompassing early childhood, primary and secondary education. The Factors Influencing Teaching Choice (FIT-Choice) instrument was translated into Turkish and its construct validity and reliability assessed. Altruistic…

  15. Literary Translation in the Classroom.

    ERIC Educational Resources Information Center

    Brown, Calvin S.

    Literary translation in the classroom can provoke discussion with great pedagogical value even though a definitive translation is not the goal. A verse from Horace and one from Du Bellay illustrate the possible choices of vocabulary and phrasing which even two relatively straightforward lines present. An alert teacher can bring out subtleties of…

  16. Cross-cultural adaptation and translation of a quality of life tool for new mothers: a methodological and experiential account from six countries.

    PubMed

    Symon, Andrew; Nagpal, Jitender; Maniecka-Bryła, Irena; Nowakowska-Głąb, Agata; Rashidian, Arash; Khabiri, Roghayeh; Mendes, Isabel; Pinheiro, Ana Karina Bezerra; de Oliveira, Mirna Fontenele; Wu, Liping

    2013-04-01

    To examine the challenges and solutions encountered in the translation and cross-cultural adaptation of an English language quality of life tool in India, China, Iran, Portugal, Brazil, and Poland. Those embarking on research involving translation and cross-cultural adaptation must address certain practical and conceptual issues. These include instrument choice, linguistic factors, and cultural or philosophical differences, which may render an instrument inappropriate, even when expertly translated. Publication bias arises when studies encountering difficulties do not admit to these, or are not published at all. As an educative guide to the potential pitfalls involved in the cross-cultural adaptation process, this article reports the conceptual, linguistic, and methodological experiences of researchers in six countries, who translated and adapted the Mother-Generated Index, a quality of life tool originally developed in English. Principal investigator experience from six stand-alone studies (two published) ranging from postgraduate research to citywide surveys. Discussion/implications for nursing: This analysis of a series of stand-alone cross-cultural studies provides lessons about how conceptual issues, such as the uniqueness of perceived quality of life and the experience of new motherhood, can be addressed. This original international approach highlights practical lessons relating to instrument choice, and the resources available to researchers with different levels of experience. Although researchers may be confident of effective translation, conceptual and practical difficulties may be more problematic. Instrument choice is crucial. Researchers must negotiate adequate resources for cross-cultural research, including time, translation facilities, and expert advice about conceptual issues.

  17. Preprocessing and meta-classification for brain-computer interfaces.

    PubMed

    Hammon, Paul S; de Sa, Virginia R

    2007-03-01

    A brain-computer interface (BCI) is a system which allows direct translation of brain states into actions, bypassing the usual muscular pathways. A BCI system works by extracting user brain signals, applying machine learning algorithms to classify the user's brain state, and performing a computer-controlled action. Our goal is to improve brain state classification. Perhaps the most obvious way to improve classification performance is the selection of an advanced learning algorithm. However, it is now well known in the BCI community that careful selection of preprocessing steps is crucial to the success of any classification scheme. Furthermore, recent work indicates that combining the output of multiple classifiers (meta-classification) leads to improved classification rates relative to single classifiers (Dornhege et al., 2004). In this paper, we develop an automated approach which systematically analyzes the relative contributions of different preprocessing and meta-classification approaches. We apply this procedure to three data sets drawn from BCI Competition 2003 (Blankertz et al., 2004) and BCI Competition III (Blankertz et al., 2006), each of which exhibit very different characteristics. Our final classification results compare favorably with those from past BCI competitions. Additionally, we analyze the relative contributions of individual preprocessing and meta-classification choices and discuss which types of BCI data benefit most from specific algorithms.

  18. Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems

    DTIC Science & Technology

    2007-05-01

    …these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the… …translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to… [Figure residue: FPGA design-flow diagram from MATLAB/C code through HDL, logic synthesis, netlist, and place-and-route to bitstream, targeting hard and soft µP cores.]

  19. Fast decoder for local quantum codes using Groebner basis

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2013-03-01

    Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n² log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.

  20. Image registration under translation and rotation in two-dimensional planes using Fourier slice theorem.

    PubMed

    Pohit, M; Sharma, J

    2015-05-10

    Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. Use of the log-polar transform gives a solution to this problem, but at the cost of losing the vital phase information from the image. The main objective of this paper is to develop an algorithm based on the Fourier slice theorem for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable for arbitrary object shifts and rotations over a full 180° range.
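
    The translation part of such a scheme can be sketched with standard phase correlation, which estimates shifts from the normalized cross-power spectrum; recovering rotation via the Fourier slice theorem, the paper's actual contribution, is not attempted here.

    ```python
    import numpy as np

    def phase_correlation(ref, moved):
        """Translation estimate from the normalized cross-power spectrum."""
        F, G = np.fft.fft2(ref), np.fft.fft2(moved)
        R = np.conj(F) * G
        R /= np.abs(R) + 1e-12                 # keep only phase information
        peak = np.fft.ifft2(R).real
        dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
        h, w = peak.shape                      # unwrap circular shifts
        return (dy - h if dy > h // 2 else dy,
                dx - w if dx > w // 2 else dx)

    rng = np.random.default_rng(2)
    img = rng.random((256, 256))
    print(phase_correlation(img, np.roll(img, (12, -7), axis=(0, 1))))
    # expect (12, -7)
    ```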

  1. mRNA translation and protein synthesis: an analysis of different modelling methodologies and a new PBN based approach

    PubMed Central

    2014-01-01

    Background: mRNA translation involves simultaneous movement of multiple ribosomes on the mRNA and is also subject to regulatory mechanisms at different stages. Translation can be described by various codon-based models, including ODE, TASEP, and Petri net models. Although such models have been extensively used, the overlap and differences between these models and the implications of the assumptions of each model have not been systematically elucidated. The selection of the most appropriate modelling framework, and the most appropriate way to develop coarse-grained/fine-grained models in different contexts, is not clear. Results: We systematically analyze and compare how different modelling methodologies can be used to describe translation. We define various statistically equivalent codon-based simulation algorithms and analyze the importance of the update rule in determining the steady state, an aspect often neglected. Then a novel probabilistic Boolean network (PBN) model is proposed for modelling translation, which enjoys an exact numerical solution. This solution matches those of numerical simulation from other methods and acts as a complementary tool to analytical approximations and simulations. The advantages and limitations of various codon-based models are compared, and illustrated by examples with real biological complexities such as slow codons, premature termination and feedback regulation. Our studies reveal that while different models give broadly similar trends in many cases, important differences also arise and can be clearly seen in the dependence of the translation rate on different parameters. Furthermore, the update rule affects the steady state solution. Conclusions: The codon-based models are based on different levels of abstraction. Our analysis suggests that a multiple model approach to understanding translation allows one to ascertain which aspects of the conclusions are robust with respect to the choice of modelling methodology, and when (and why) important differences may arise. This approach also allows for an optimal use of analysis tools, which is especially important when additional complexities or regulatory mechanisms are included. This approach can provide a robust platform for dissecting translation, and results in an improved predictive framework for applications in systems and synthetic biology. PMID:24576337
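
    A minimal TASEP-style simulation, one of the codon-based model classes the paper compares, using a random-sequential update rule (the paper's point is precisely that the choice of update rule affects the steady state). Rates and the slow-codon position are made-up values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, steps = 120, 200_000
    init_rate = 0.3                       # initiation attempt probability
    hop = np.full(L, 0.9); hop[60] = 0.1  # one slow codon in the middle
    occupied = np.zeros(L, dtype=bool)
    completed = 0

    for _ in range(steps):
        i = rng.integers(-1, L)           # pick a site (-1 = initiation)
        if i == -1:
            if not occupied[0] and rng.random() < init_rate:
                occupied[0] = True
        elif occupied[i] and rng.random() < hop[i]:
            if i == L - 1:                # termination: ribosome leaves
                occupied[i] = False
                completed += 1
            elif not occupied[i + 1]:     # excluded-volume rule
                occupied[i], occupied[i + 1] = False, True

    print("protein production rate:", completed / steps)
    print("density before/after slow codon:",
          occupied[:60].mean(), occupied[61:].mean())
    ```

    The ribosome "traffic jam" upstream of the slow codon shows up as a higher occupancy density before position 60 than after it.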

  2. Probability of coding of a DNA sequence: an algorithm to predict translated reading frames from their thermodynamic characteristics.

    PubMed Central

    Tramontano, A; Macchiato, M F

    1986-01-01

    An algorithm to determine the probability that a reading frame codes for a protein is presented. It is based on the results of our previous studies on the thermodynamic characteristics of a translated reading frame. We also develop a prediction procedure to distinguish between coding and non-coding reading frames. The procedure is based on the characteristics of the putative product of the DNA sequence and not on periodicity characteristics of the sequence, so the prediction is not biased by the presence of overlapping translated reading frames or by the presence of translated reading frames on the complementary DNA strand. PMID:3753761
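
    The paper's predictor is built on thermodynamic characteristics of the putative protein product; as a generic contrast, the sketch below scores reading frames with a simple codon-usage log-odds model (made-up codon enrichments), illustrating only the shared pattern of scoring each frame and comparing.

    ```python
    import math

    # Made-up coding-biased codon frequencies; everything else is uniform.
    CODING_BIAS = {"CTG": 0.04, "GAG": 0.04, "AAG": 0.04,
                   "GCC": 0.03, "ATC": 0.03, "GAC": 0.03}

    def frame_score(dna, frame):
        """Average log-odds of codons in one frame vs. a uniform background."""
        uniform = 1.0 / 64
        score, n = 0.0, 0
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            score += math.log(CODING_BIAS.get(codon, uniform) / uniform)
            n += 1
        return score / max(n, 1)

    dna = "ATGCTGGAGAAGGCCATCGACCTGGAGTAA"
    for f in range(3):                  # the likeliest coding frame scores highest
        print(f"frame {f}: {frame_score(dna, f):+.3f}")
    ```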

  3. Translation and integration of numerical atomic orbitals in linear molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinäsmäki, Sami, E-mail: sami.heinasmaki@gmail.com

    2014-02-14

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.

  4. BPF-type region-of-interest reconstruction for parallel translational computed tomography.

    PubMed

    Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin

    2017-01-01

    The objective of this study is to present and test a new ultra-low-cost linear scan based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture was named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT. However, the ROI images reconstructed from truncated projections have severe truncation artefacts. In order to overcome this limitation, in this study we proposed two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only reconstruct ROI images more accurately from truncated projections but also generate high-quality images for the entire image support in some circumstances.

  5. "O tradutor e os seus trebelhos": Fred P. Ellison and Translation

    ERIC Educational Resources Information Center

    Frizzi, Adria

    2016-01-01

    In this article, Adria Frizzi provides a brief overview of Fred Ellison's work in the field of literary translation. Frizzi does so by outlining Ellison's trajectory as a translator--highlighting how his choices in this area of his scholarship fit into a broader, multipronged approach to the study and diffusion of the language, culture, and…

  6. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations.

    PubMed

    Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L

    1997-04-01

    This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input, and easily implements either affine (i.e., linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point thin-plate spline (TPS) warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
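
    The registration criterion itself is compact: mutual information computed from a joint gray-level histogram. The sketch below maximizes it by brute force over integer translations only, whereas the paper optimizes affine and thin-plate-spline parameters iteratively.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """MI of two equally shaped images via their joint histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] /
                                       (px[:, None] * py[None, :])[nz]))

    rng = np.random.default_rng(3)
    fixed = rng.random((64, 64))
    moving = (np.roll(fixed, (4, -3), axis=(0, 1))
              + 0.05 * rng.standard_normal((64, 64)))   # shifted + noisy

    # Brute-force search over integer shifts; MI peaks at the aligning shift.
    best = max(((mutual_information(fixed,
                                    np.roll(moving, (dy, dx), axis=(0, 1))),
                 (dy, dx))
                for dy in range(-8, 9) for dx in range(-8, 9)))
    print("best shift:", best[1])   # expect (-4, 3), undoing the applied roll
    ```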

  7. Translation of P = kT into a Pictorial External Representation by High School Seniors

    ERIC Educational Resources Information Center

    Matijaševic, Igor; Korolija, Jasminka N.; Mandic, Ljuba M.

    2016-01-01

    This paper describes the results achieved by high school seniors on an item which involves translation of the equation P = kT into a corresponding pictorial external representation. The majority of students (the classes of 2011, 2012 and 2013) did not give the correct answer to the multiple choice part of the translation item. They chose pictorial…

  8. "Foreignizing" or "Domesticating" the Ideology of Parental Control in Translating Stories for Children: Insights from Contrastive Discourse Analysis

    ERIC Educational Resources Information Center

    Pounds, Gabrina

    2011-01-01

    Translating for children is increasingly being recognized as a challenge worthy of as much attention as translating for adults. One of the key issues debated in this domain is the choice between "foreignizing" and "domesticating" strategies in relation to the pedagogic or, more generally, ideology forming or ideology-reflecting potential of…

  9. Undesirable Choice Biases with Small Differences in the Spatial Structure of Chance Stimulus Sequences.

    PubMed

    Herrera, David; Treviño, Mario

    2015-01-01

    In two-alternative discrimination tasks, experimenters usually randomize the location of the rewarded stimulus so that systematic behavior with respect to irrelevant stimuli can only produce chance performance on the learning curves. One way to achieve this is to use random numbers derived from a discrete binomial distribution to create a 'full random training schedule' (FRS). When using FRS, however, sporadic but long laterally-biased training sequences occur by chance and such 'input biases' are thought to promote the generation of laterally-biased choices (i.e., 'output biases'). As an alternative, a 'Gellerman-like training schedule' (GLS) can be used. It removes most input biases by prohibiting the reward from appearing on the same location for more than three consecutive trials. The sequence of past rewards obtained from choosing a particular discriminative stimulus influences the probability of choosing that same stimulus on subsequent trials. Assuming that the long-term average ratio of choices matches the long-term average ratio of reinforcers, we hypothesized that a reduced amount of input biases in GLS compared to FRS should lead to a reduced production of output biases. We compared the choice patterns produced by a 'Rational Decision Maker' (RDM) in response to computer-generated FRS and GLS training sequences. To create a virtual RDM, we implemented an algorithm that generated choices based on past rewards. Our simulations revealed that, although the GLS presented fewer input biases than the FRS, the virtual RDM produced more output biases with GLS than with FRS under a variety of test conditions. Our results reveal that the statistical and temporal properties of training sequences interacted with the RDM to influence the production of output biases. Thus, discrete changes in the training paradigms did not translate linearly into modifications in the pattern of choices generated by a RDM. Virtual RDMs could be further employed to guide the selection of proper training schedules for perceptual decision-making studies.
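
    A minimal sketch of the two schedules and a reward-history-driven chooser in the spirit of the virtual RDM; the exponential-forgetting matching rule below is an assumption for illustration, not the authors' exact algorithm.

        # Sketch: FRS vs. GLS reward-side sequences and a simple virtual
        # decision maker whose choice probability matches its recent
        # reward history (assumed matching rule, not the paper's RDM).
        import random

        def frs(n):
            # Full random schedule: fair Bernoulli draw per trial (0=left, 1=right).
            return [random.randint(0, 1) for _ in range(n)]

        def gls(n, max_run=3):
            # Gellerman-like schedule: no side rewarded more than 3 trials in a row.
            seq = []
            for _ in range(n):
                side = random.randint(0, 1)
                if len(seq) >= max_run and all(s == seq[-1] for s in seq[-max_run:]):
                    side = 1 - seq[-1]  # force a switch after a run of 3
                seq.append(side)
            return seq

        def simulate(schedule, window=20):
            rewards = [0.5, 0.5]  # smoothed reward tally per side
            choices = []
            for correct in schedule:
                p_right = rewards[1] / (rewards[0] + rewards[1])  # matching law
                choice = 1 if random.random() < p_right else 0
                choices.append(choice)
                if choice == correct:
                    rewards[choice] += 1.0
                # exponential forgetting keeps the history window finite
                rewards = [r * (1 - 1 / window) for r in rewards]
            return choices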

  10. Career Choice Attitudes of Jordanian Adolescents Related to Educational Level of Parents.

    ERIC Educational Resources Information Center

    Damin, Monther Abdel Hameed; Hodinko, Bernard A.

    A study examined how the educational level of parents related to the career-choice attitudes of adolescents. The Career Maturity Inventory Attitude Scale, Form A-1, was translated into Arabic and used to assess the attitudes and feelings about making a career choice and entering the working world of a sample of 841 students enrolled in 28 high…

  11. On-Demand Associative Cross-Language Information Retrieval

    NASA Astrophysics Data System (ADS)

    Geraldo, André Pinto; Moreira, Viviane P.; Gonçalves, Marcos A.

    This paper proposes the use of algorithms for mining association rules as an approach to Cross-Language Information Retrieval. These algorithms have been widely used to analyse market basket data. The idea is to map the problem of finding associations between sales items to the problem of finding term translations over a parallel corpus. The proposal was validated by means of experiments using queries in two distinct languages (Portuguese and Finnish) to retrieve documents in English. The results show that the performance of our proposed approach is comparable to the performance of the monolingual baseline and to query translation via machine translation, even though those systems employ more complex Natural Language Processing techniques. The combination of machine translation and our approach yielded the best results, even outperforming the monolingual baseline.
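
    A minimal sketch of the mapping: each aligned sentence pair is treated as a market-basket transaction, and candidate translations are rules scored by support and confidence. The toy corpus and thresholds are illustrative, not the paper's data.

        # Sketch: aligned sentence pairs as "transactions"; candidate term
        # translations scored by association-rule support and confidence.
        from collections import Counter
        from itertools import product

        def translation_rules(parallel_corpus, min_support=2, min_confidence=0.5):
            src_count, pair_count = Counter(), Counter()
            for src_sent, tgt_sent in parallel_corpus:
                src_terms, tgt_terms = set(src_sent.split()), set(tgt_sent.split())
                for s in src_terms:
                    src_count[s] += 1
                for s, t in product(src_terms, tgt_terms):
                    pair_count[(s, t)] += 1
            rules = {}
            for (s, t), n in pair_count.items():
                conf = n / src_count[s]  # confidence of rule s -> t
                if n >= min_support and conf >= min_confidence:
                    rules.setdefault(s, []).append((t, conf))
            return rules

        corpus = [("gato preto", "black cat"), ("gato branco", "white cat")]
        print(translation_rules(corpus))  # 'gato' -> [('cat', 1.0)]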

  12. A Novel Latin Hypercube Algorithm via Translational Propagation

    PubMed Central

    Pan, Guang; Ye, Pengcheng

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration (TPSLE) algorithm is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed by translating a small initial block of points, generated by the SLE algorithm, across the design space. The TPSLE algorithm thus offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time while retaining acceptable space-filling and projective properties. PMID:25276844

  13. Chunk Alignment for Corpus-Based Machine Translation

    ERIC Educational Resources Information Center

    Kim, Jae Dong

    2011-01-01

    Since sub-sentential alignment is critically important to the translation quality of an Example-Based Machine Translation (EBMT) system, which operates by finding and combining phrase-level matches against the training examples, we developed a new alignment algorithm for the purpose of improving the EBMT system's performance. This new…

  14. Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms

    ERIC Educational Resources Information Center

    Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.

    2011-01-01

    To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams regarding test reliability, student performance, total item discrimination and item difficulty. Data from six 3rd year medical students' end of term exams in internal medicine from 2005 to 2008 at Munich University were analysed (1,255 students,…

  15. Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary

    Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled from the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, whereas without correction it was within these tolerances only 10.7% of the time. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that the control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.

  16. Interconversion algorithm between mechanical and dielectric relaxation measurements for acetate of cis- and trans-2-phenyl-5-hydroxymethyl-1,3-dioxane.

    PubMed

    Garcia-Bernabé, A; Lidón-Roger, J V; Sanchis, M J; Díaz-Calleja, R; del Castillo, L F

    2015-10-01

    The dielectric and mechanical spectroscopies of acetate of cis- and trans-2-phenyl-5-hydroxymethyl-1,3-dioxane are reported in the frequency domain from 10^-2 to 10^6 Hz. This ester was selected for its predominant α relaxation, relative to which the β relaxation can be neglected. This study consists of determining an interconversion algorithm between dielectric and mechanical measurements, given by a relation between the rotational and translational complex viscosities. These viscosities were obtained from measurements of the complex dielectric permittivity and from dynamic mechanical analysis, respectively. The definitions of rotational and translational viscosities were evaluated by means of fractional calculus, using the fit parameters of the Havriliak-Negami empirical model obtained in the dielectric and mechanical characterization of the α relaxation. This interconversion algorithm is a generalization of the breakdown of the Stokes-Einstein-Debye relationship. It uses a power law with an exponent defined as the shape factor, which modifies the translational viscosity. Two other factors are introduced for the interconversion: a shift factor, which displaces the translational viscosity in the frequency domain, and a scale factor, which brings the two viscosities to equal values. In this paper, the shape factor has been identified as the ratio between the slopes of the moduli of the complex viscosities at higher frequency. This is interpreted as the degree of kinetic coupling between the molecular rotational and translational movements. Alternatively, another interconversion algorithm has been expressed by means of the dielectric and mechanical moduli.
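
    A minimal sketch of the ingredients involved: the Havriliak-Negami model evaluated over the stated frequency range, followed by an illustrative application of the three interconversion factors. The proportionality used for the rotational viscosity and all factor values are assumptions for illustration, not the paper's fitted results.

        # Sketch: Havriliak-Negami fit of the alpha relaxation, then the
        # shape / shift / scale interconversion (all values illustrative).
        import numpy as np

        def havriliak_negami(omega, eps_inf, d_eps, tau, a, b):
            # Empirical HN model: eps*(w) = eps_inf + d_eps / (1 + (i*w*tau)^a)^b
            return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** a) ** b

        omega = 2.0 * np.pi * np.logspace(-2, 6, 200)   # 10^-2 to 10^6 Hz
        eps = havriliak_negami(omega, eps_inf=3.0, d_eps=4.5, tau=1e-3, a=0.9, b=0.5)

        # Assumed proportionality (up to constants) for the rotational
        # complex viscosity, then the three-factor interconversion:
        eta_rot = (eps - 3.0) / (1j * omega)
        shape, shift, scale = 0.8, 10.0, 2.0            # illustrative factors
        eta_rot_shifted = np.interp(shift * omega, omega, np.abs(eta_rot))
        eta_trans_mag = scale * eta_rot_shifted ** shape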

  17. Classifying publications from the clinical and translational science award program along the translational research spectrum: a machine learning approach.

    PubMed

    Surkis, Alisa; Hogle, Janice A; DiazGranados, Deborah; Hunt, Joe D; Mazmanian, Paul E; Connors, Emily; Westaby, Kate; Whipple, Elizabeth C; Adamus, Trisha; Mueller, Meridith; Aphinyanaphongs, Yindalon

    2016-08-05

    Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program and of translational research in general is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications. Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined T1/T2 and T3/T4 categories due to low frequency of these publication types compared to the frequency of T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier. The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set. Very good performance was achieved for the classifiers as represented by the area under the receiver operating characteristic curve (AUC), with an AUC of 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4. The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform definition interpretation, and algorithms that perform well in classifying publications along the translational spectrum provide a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
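
    A minimal sketch of a publication classifier of this general kind, trained on abstract text and evaluated by AUC. The TF-IDF features and logistic-regression learner are assumptions (the authors compared several algorithms and feature sets), and the data below are toy placeholders.

        # Sketch: bag-of-words text classifier for translational-spectrum
        # categories, evaluated by AUC (toy data; pipeline is assumed).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        abstracts = ["protein binding assay in vitro ...",
                     "randomized clinical trial of therapy ...",
                     "community health outcomes survey ..."] * 20
        labels = [1, 0, 0] * 20   # 1 = T0 (basic) publication; toy labels

        X_tr, X_te, y_tr, y_te = train_test_split(abstracts, labels,
                                                  test_size=0.3,
                                                  random_state=0,
                                                  stratify=labels)
        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
        clf.fit(X_tr, y_tr)
        scores = clf.predict_proba(X_te)[:, 1]
        print("AUC:", roc_auc_score(y_te, scores))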

  18. Using a Portfolio of Algorithms for Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Sherwood, Robert; Knight, Russell; Rabideau, Gregg; Chien, Steve; Tran, Daniel; Engelhardt, Barbara

    2003-01-01

    The Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been reported in several previous NASA Tech Briefs articles, includes a subsystem that utilizes a portfolio of heuristic algorithms that work synergistically to solve problems. The nature of the synergy is that the algorithms' likelihoods of success are negatively correlated: that is, when a combination of them is used to solve a problem, the probability that at least one of them will succeed is greater than it would be if their successes were statistically independent. In ASPEN, the portfolio of algorithms is used in a planning process of the iterative repair type, in which conflicts are detected and addressed one at a time until either no conflicts exist or a user-defined time limit has been exceeded. At each choice point (e.g., selection of a conflict; selection of a method of resolving the conflict; or choice of a move, addition, or deletion), ASPEN makes a stochastic choice of a combination of algorithms from the portfolio. This approach makes it possible for the search to escape from looping and from solutions that are locally but not globally optimum.
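
    A minimal sketch of the control flow described above, with placeholder repair heuristics and weights standing in for ASPEN's actual portfolio:

        # Sketch: iterative repair driven by a portfolio of heuristics,
        # with a stochastic choice at each decision point. Heuristics and
        # weights are placeholders, not ASPEN's actual portfolio.
        import random

        def iterative_repair(schedule, find_conflicts, portfolio, time_limit=1000):
            # portfolio: list of (repair_function, selection_weight) pairs
            for _ in range(time_limit):
                conflicts = find_conflicts(schedule)
                if not conflicts:
                    return schedule                     # conflict-free plan found
                conflict = random.choice(conflicts)     # stochastic conflict choice
                repair, = random.choices([r for r, _ in portfolio],
                                         weights=[w for _, w in portfolio])
                schedule = repair(schedule, conflict)   # move / add / delete
            return schedule                             # best effort at time limit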

  19. Nonlinear rescaling of control values simplifies fuzzy control

    NASA Technical Reports Server (NTRS)

    Vanlangingham, H.; Tsoukkas, A.; Kreinovich, V.; Quintana, C.

    1993-01-01

    Traditional control theory is well developed mainly for linear control situations. In non-linear cases there is no general method of generating a good control, so we have to rely on the ability of experts (operators) to control such systems. If we want to automate their control, we must acquire their knowledge and translate it into a precise control strategy. The experts' knowledge is usually represented in non-numeric terms, namely, in terms of uncertain statements of the type 'if the obstacle is straight ahead, the distance to it is small, and the velocity of the car is medium, press the brakes hard'. Fuzzy control is a methodology that translates such statements into precise formulas for control. The necessary first step of this strategy consists of assigning membership functions to all the terms that the expert uses in his rules (in our sample phrase these words are 'small', 'medium', and 'hard'). The appropriate choice of a membership function can drastically improve the quality of a fuzzy control. In the simplest cases, we can take functions whose domains have equally spaced endpoints. Because of that, many software packages for fuzzy control are based on this choice of membership functions. This choice is not very efficient in more complicated cases. Therefore, methods have been developed that use neural networks or genetic algorithms to 'tune' membership functions. But this tuning takes a lot of time (for example, several thousand iterations are typical for neural networks). In some cases there are evident physical reasons why equally spaced domains do not work: e.g., if the control variable u is always positive (as when we control the temperature in a reactor), then negative values (which are generated by equal spacing) simply make no sense. In this case it is reasonable to choose another scale u' = f(u) to represent u, so that equal spacing works well for u'. In the present paper we formulate the problem of finding the best rescaling function, solve this problem, and show (on a real-life example) that after an optimal rescaling, the un-tuned fuzzy control can be as good as the best state-of-the-art traditional non-linear controls.

  20. Studies 1. The Yugoslav Serbo-Croatian-English Contrastive Project.

    ERIC Educational Resources Information Center

    Filipovic, Rudolf, Ed.

    The first volume in this series on Serbo-Croatian-English contrastive analysis contains four articles. They are: "Contrasting via Translation: Formal Correspondence vs. Translation Equivalence," by Vladimir Ivir; "Approach to Contrastive Analysis," by Leonardo Spalatin; and "The Choice of the Corpus for the Contrastive Analysis of Serbo-Croatian…

  1. Efficient implementation of the Metropolis-Hastings algorithm, with application to the Cormack-Jolly-Seber model

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2008-01-01

    Judicious choice of candidate generating distributions improves efficiency of the Metropolis-Hastings algorithm. In Bayesian applications, it is sometimes possible to identify an approximation to the target posterior distribution; this approximate posterior distribution is a good choice for candidate generation. These observations are applied to analysis of the Cormack-Jolly-Seber model and its extensions.
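
    A minimal sketch of the idea in Python: an independence-type Metropolis-Hastings sampler whose candidate-generating distribution is an approximation to the target posterior. The toy densities below stand in for the Cormack-Jolly-Seber posterior.

        # Sketch: Metropolis-Hastings with an approximate posterior as the
        # candidate-generating distribution (independence sampler).
        import math, random

        def log_target(x):
            # toy log-posterior: a slightly skewed unimodal density
            return -0.5 * x * x + 0.3 * math.sin(x)

        def log_proposal(x):
            return -0.5 * x * x        # N(0,1) approximation to the target

        def mh_independence(n, log_pi, log_q, draw_q):
            x, chain = 0.0, []
            for _ in range(n):
                y = draw_q()
                # acceptance ratio: min(1, pi(y) q(x) / (pi(x) q(y)))
                log_alpha = (log_pi(y) - log_pi(x)) + (log_q(x) - log_q(y))
                if random.random() < math.exp(min(0.0, log_alpha)):
                    x = y
                chain.append(x)
            return chain

        chain = mh_independence(10000, log_target, log_proposal,
                                lambda: random.gauss(0.0, 1.0))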

  2. Knowledge of healthy foods does not translate to healthy snack consumption among exercise science undergraduates.

    PubMed

    McArthur, Laura H; Valentino, Antonette; Holbert, Donald

    2017-06-01

    This cross-sectional survey study compared the on- and off-campus snack choices and related correlates of convenience samples of exercise science (ES) (n = 165, M = 45%, F = 55%) and non-exercise science (NES) (n = 160, M = 43%, F = 57%) undergraduates. The hypothesis posed was that knowledge of healthy foods would not translate to healthier snack consumption by the ES students, and that the snack choices and related correlates of ES and NES students would be similar. Data were collected using self-administered questionnaires completed in classrooms (ES sample) and at high-traffic locations on campus (NES sample). Chi-square and t-test analyses compared ES and NES students on snack correlates. Snacks consumed most often by the ES and NES students on campus were health bars/squares (n = 56 vs. n = 48) and savory snacks (n = 55 vs. n = 71), and off campus were savory snacks (n = 60 vs. n = 71) and fruits (n = 41 vs. n = 34). Over half of both samples believed their snack choices were a mix of unhealthy and healthy. Fruits were considered healthier snacks and chips less healthy by both samples, and fruits were the most often recommended snack. About 20% believed these choices would impact their health unfavorably, and about two thirds self-classified in the action stage for healthy snacking. Since knowledge about healthy food choices did not translate to healthy snack selection, these students would benefit from interventions that teach selection and preparation of healthy snacks on a restricted budget.

  3. Assessment of the information content of patterns: an algorithm

    NASA Astrophysics Data System (ADS)

    Daemi, M. Farhang; Beurle, R. L.

    1991-12-01

    A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose which confirms the results of the preliminary investigation. Use of the algorithm facilitates a much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for analysis of more realistic patterns. Owing to the volume of calculation involved in these algorithms, extensive computing facilities were necessary. The major part of the work was carried out on an ICL 3900 series mainframe computer, as well as on powerful workstations such as a RISC-architecture MIPS machine.

  4. Complications of Translating the Meanings of the Holy Qur'an at Word Level in the English Language in Relation to Frame Semantic Theory

    ERIC Educational Resources Information Center

    Balla, Asjad Ahmed Saeed; Siddiek, Ahmed Gumaa

    2017-01-01

    The present study is an attempt to investigate the problems resulting from the lexical choice in the translation of the Holy Qur'an to emphasize the importance of the theory of "Frame Semantics" in the translation process. It has been conducted with the aim of measuring the difference in concept between the two languages Arabic and…

  5. Ensemble of hybrid genetic algorithm for two-dimensional phase unwrapping

    NASA Astrophysics Data System (ADS)

    Balakrishnan, D.; Quan, C.; Tay, C. J.

    2013-06-01

    Phase unwrapping is the final and trickiest step in any phase retrieval technique. Phase unwrapping by artificial intelligence methods (optimization algorithms) such as hybrid genetic algorithms, reverse simulated annealing, particle swarm optimization and minimum cost matching has shown better results than conventional phase unwrapping methods. In this paper, an ensemble of hybrid genetic algorithms with parallel populations is proposed to solve the branch-cut phase unwrapping problem. In a single-population hybrid genetic algorithm, the selection, cross-over and mutation operators are applied to obtain a new population in every generation. The parameters and choice of operators affect the performance of the hybrid genetic algorithm. The ensemble of hybrid genetic algorithms makes it possible to use different parameter sets and different choices of operators simultaneously. Each population uses a different set of parameters, and the offspring of each population compete against the offspring of all other populations, which use different sets of parameters. The effectiveness of the proposed algorithm is demonstrated by phase unwrapping examples, and the advantages of the proposed method are discussed.

  6. Recent Translational Findings on Impulsivity in Relation to Drug Abuse

    PubMed Central

    Weafer, Jessica; Mitchell, Suzanne H.

    2015-01-01

    Impulsive behavior is strongly implicated in drug abuse, as both a cause and a consequence of drug use. To understand how impulsive behaviors lead to and result from drug use, translational evidence from both human and non-human animal studies is needed. Here, we review recent (2009 or later) studies that have investigated two major components of impulsive behavior, inhibitory control and impulsive choice, across preclinical and clinical studies. We concentrate on the stop-signal task as the measure of inhibitory control and delay discounting as the measure of impulsive choice. Consistent with previous reports, recent studies show greater impulsive behavior in drug users compared with non-users. Additionally, new evidence supports the prospective role of impulsive behavior in drug abuse, and has begun to identify the neurobiological mechanisms underlying impulsive behavior. We focus on the commonalities and differences in findings between preclinical and clinical studies, and suggest future directions for translational research. PMID:25678985

  7. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  8. Galaxy And Mass Assembly: automatic morphological classification of galaxies using statistical learning

    NASA Astrophysics Data System (ADS)

    Sreejith, Sreevarsha; Pereverzyev, Sergiy, Jr.; Kelvin, Lee S.; Marleau, Francine R.; Haltmeier, Markus; Ebner, Judith; Bland-Hawthorn, Joss; Driver, Simon P.; Graham, Alister W.; Holwerda, Benne W.; Hopkins, Andrew M.; Liske, Jochen; Loveday, Jon; Moffett, Amanda J.; Pimbblet, Kevin A.; Taylor, Edward N.; Wang, Lingyu; Wright, Angus H.

    2018-03-01

    We apply four statistical learning methods to a sample of 7941 galaxies (z < 0.06) from the Galaxy And Mass Assembly survey to test the feasibility of using automated algorithms to classify galaxies. Using 10 features measured for each galaxy (sizes, colours, shape parameters, and stellar mass), we apply the techniques of Support Vector Machines, Classification Trees, Classification Trees with Random Forest (CTRF) and Neural Networks, returning True Prediction Ratios (TPRs) of 75.8 per cent, 69.0 per cent, 76.2 per cent, and 76.0 per cent, respectively. Those occasions whereby all four algorithms agree with each other yet disagree with the visual classification (`unanimous disagreement') serve as a potential indicator of human error in classification, occurring in ˜ 9 per cent of ellipticals, ˜ 9 per cent of little blue spheroids, ˜ 14 per cent of early-type spirals, ˜ 21 per cent of intermediate-type spirals, and ˜ 4 per cent of late-type spirals and irregulars. We observe that the choice of parameters rather than that of algorithms is more crucial in determining classification accuracy. Due to its simplicity in formulation and implementation, we recommend the CTRF algorithm for classifying future galaxy data sets. Adopting the CTRF algorithm, the TPRs of the five galaxy types are: E, 70.1 per cent; LBS, 75.6 per cent; S0-Sa, 63.6 per cent; Sab-Scd, 56.4 per cent; and Sd-Irr, 88.9 per cent. Further, we train a binary classifier using this CTRF algorithm that divides galaxies into spheroid-dominated (E, LBS, and S0-Sa) and disc-dominated (Sab-Scd and Sd-Irr), achieving an overall accuracy of 89.8 per cent. This translates into an accuracy of 84.9 per cent for spheroid-dominated systems and 92.5 per cent for disc-dominated systems.
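
    A minimal sketch of a CTRF-style classifier on 10 features using scikit-learn's random forest. The data here are synthetic placeholders; the paper's exact features, labels and hyperparameters are not reproduced.

        # Sketch: random-forest galaxy classifier on 10 features (sizes,
        # colours, shape parameters, stellar mass); synthetic data only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(7941, 10))                # 10 features per galaxy
        y = rng.integers(0, 5, size=7941)              # 5 visual classes (E..Sd-Irr)

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        tpr = cross_val_score(clf, X, y, cv=5).mean()  # mean true prediction ratio
        print(f"TPR: {tpr:.3f}")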

  9. Learning the Structure of Biomedical Relationships from Unstructured Text

    PubMed Central

    Percha, Bethany; Altman, Russ B.

    2015-01-01

    The published biomedical research literature encompasses most of our understanding of how drugs interact with gene products to produce physiological responses (phenotypes). Unfortunately, this information is distributed throughout the unstructured text of over 23 million articles. The creation of structured resources that catalog the relationships between drugs and genes would accelerate the translation of basic molecular knowledge into discoveries of genomic biomarkers for drug response and prediction of unexpected drug-drug interactions. Extracting these relationships from natural language sentences on such a large scale, however, requires text mining algorithms that can recognize when different-looking statements are expressing similar ideas. Here we describe a novel algorithm, Ensemble Biclustering for Classification (EBC), that learns the structure of biomedical relationships automatically from text, overcoming differences in word choice and sentence structure. We validate EBC's performance against manually-curated sets of (1) pharmacogenomic relationships from PharmGKB and (2) drug-target relationships from DrugBank, and use it to discover new drug-gene relationships for both knowledge bases. We then apply EBC to map the complete universe of drug-gene relationships based on their descriptions in Medline, revealing unexpected structure that challenges current notions about how these relationships are expressed in text. For instance, we learn that newer experimental findings are described in consistently different ways than established knowledge, and that seemingly pure classes of relationships can exhibit interesting chimeric structure. The EBC algorithm is flexible and adaptable to a wide range of problems in biomedical text mining. PMID:26219079

  10. Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.

    PubMed

    Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme

    2014-03-01

    Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While image correlation algorithms have traditionally been used for re-alignment by translation only, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module, named SIFT_PyOCL, has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment both on processors and on graphics cards. The performance of the algorithm allows online processing of large datasets.

  11. Development of a translational model to screen medications for cocaine use disorder II: Choice between intravenous cocaine and money in humans

    PubMed Central

    Lile, Joshua A.; Stoops, William W.; Rush, Craig R.; Negus, S. Stevens; Glaser, Paul E. A.; Hatton, Kevin W.; Hays, Lon R.

    2016-01-01

    Background: A medication for treating cocaine use disorder has yet to be approved. Laboratory-based evaluation of candidate medications in animals and humans is a valuable means to demonstrate safety, tolerability and initial efficacy of potential medications. However, animal-to-human translation has been hampered by a lack of coordination. Therefore, we designed homologous cocaine self-administration studies in rhesus monkeys (see companion article) and human subjects in an attempt to develop linked, functionally equivalent procedures for research on candidate medications for cocaine use disorder. Methods: Eight (N=8) subjects with cocaine use disorder completed 12 experimental sessions in which they responded to receive money ($0.01, $1.00 and $3.00) or intravenous cocaine (0, 3, 10 and 30 mg/70 kg) under independent, concurrent progressive-ratio schedules. Prior to the completion of 9 choice trials, subjects sampled the cocaine dose available during that session and were informed of the monetary alternative value. Results: The allocation of behavior varied systematically as a function of cocaine dose and money value. Moreover, a similar pattern of cocaine choice was demonstrated in rhesus monkeys and humans across different cocaine doses and magnitudes of the species-specific alternative reinforcers. The subjective and cardiovascular responses to IV cocaine were an orderly function of dose, although heart rate and blood pressure remained within safe limits. Conclusions: These coordinated studies successfully established drug vs. non-drug choice procedures in humans and rhesus monkeys that yielded similar cocaine choice behavior across species. This translational research platform will be used in future research to enhance the efficiency of developing interventions to reduce cocaine use. PMID:27269368

  12. Development of a translational model to screen medications for cocaine use disorder II: Choice between intravenous cocaine and money in humans.

    PubMed

    Lile, Joshua A; Stoops, William W; Rush, Craig R; Negus, S Stevens; Glaser, Paul E A; Hatton, Kevin W; Hays, Lon R

    2016-08-01

    A medication for treating cocaine use disorder has yet to be approved. Laboratory-based evaluation of candidate medications in animals and humans is a valuable means to demonstrate safety, tolerability and initial efficacy of potential medications. However, animal-to-human translation has been hampered by a lack of coordination. Therefore, we designed homologous cocaine self-administration studies in rhesus monkeys (see companion article) and human subjects in an attempt to develop linked, functionally equivalent procedures for research on candidate medications for cocaine use disorder. Eight (N=8) subjects with cocaine use disorder completed 12 experimental sessions in which they responded to receive money ($0.01, $1.00 and $3.00) or intravenous cocaine (0, 3, 10 and 30 mg/70 kg) under independent, concurrent progressive-ratio schedules. Prior to the completion of 9 choice trials, subjects sampled the cocaine dose available during that session and were informed of the monetary alternative value. The allocation of behavior varied systematically as a function of cocaine dose and money value. Moreover, a similar pattern of cocaine choice was demonstrated in rhesus monkeys and humans across different cocaine doses and magnitudes of the species-specific alternative reinforcers. The subjective and cardiovascular responses to IV cocaine were an orderly function of dose, although heart rate and blood pressure remained within safe limits. These coordinated studies successfully established drug versus non-drug choice procedures in humans and rhesus monkeys that yielded similar cocaine choice behavior across species. This translational research platform will be used in future research to enhance the efficiency of developing interventions to reduce cocaine use. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. The validation and translation of Multidimensional Measure of Informed Choice in Greek.

    PubMed

    Gourounti, Kleanthi; Sandall, Jane

    2011-04-01

    Objective: to translate the original English version of the Multidimensional Measure of Informed Choice (MMIC) into Greek, to adapt it culturally to Greece, and to determine its psychometric properties for the assessment of informed choice in antenatal screening for Down syndrome. Design: survey using self-administered questionnaires. Setting: public hospital in Athens, Greece. Participants: 135 pregnant women with gestational age between the 11th and 20th weeks, just prior to antenatal screening for Down syndrome. Findings: 96% of women had a positive attitude towards screening and 45% had a good level of knowledge concerning the screening process for Down syndrome. Using a standard measure of informed choice, validated for use in Greek, it was found that 44% of women made an informed choice, and thus 56% of women made an uninformed choice. The internal consistency of the scales was good: Cronbach's alpha was found to be 0.76 for the attitude scale and 0.64 for the knowledge scale, suggesting that all items were appropriate to the measure. The factor analysis of the attitude scale indicated three factors with an eigenvalue over 1.0; these factors accounted for 87% of the variance. Conclusions: this study indicates that the Greek version of the MMIC appears to be a reliable and valid tool for measuring informed choice in antenatal screening for Down syndrome. Due to its short length and brief completion time, it seems to be a practical instrument for use in Greek antenatal clinics. Copyright © 2009 Elsevier Ltd. All rights reserved.
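
    The internal-consistency statistic reported here, Cronbach's alpha, is straightforward to compute from an item-response matrix; a minimal sketch (the scores below are toy values, not the study's data):

        # Sketch: Cronbach's alpha for a (respondents x items) score matrix.
        import numpy as np

        def cronbach_alpha(items):
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
            return (k / (k - 1)) * (1.0 - item_vars / total_var)

        scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]])
        print(round(cronbach_alpha(scores), 2))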

  14. Genetic Algorithms for Multiple-Choice Problems

    NASA Astrophysics Data System (ADS)

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.

  15. Sensor Network Localization by Eigenvector Synchronization Over the Euclidean Group

    PubMed Central

    CUCURINGU, MIHAI; LIPMAN, YARON; SINGER, AMIT

    2013-01-01

    We present a new approach to localization of sensors from noisy measurements of a subset of their Euclidean distances. Our algorithm starts by finding, embedding, and aligning uniquely realizable subsets of neighboring sensors called patches. In the noise-free case, each patch agrees with its global positioning up to an unknown rigid motion of translation, rotation, and possibly reflection. The reflections and rotations are estimated using the recently developed eigenvector synchronization algorithm, while the translations are estimated by solving an overdetermined linear system. The algorithm is scalable as the number of nodes increases and can be implemented in a distributed fashion. Extensive numerical experiments show that it compares favorably to other existing algorithms in terms of robustness to noise, sparse connectivity, and running time. While our approach is applicable to higher dimensions, in the current article, we focus on the two-dimensional case. PMID:23946700
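
    A minimal sketch of the translation step alone: once rotations and reflections are synchronized, pairwise displacement estimates define an overdetermined linear system for the node positions, solved here by least squares (2-D toy data; positions are recovered only up to a global shift):

        # Sketch: relative displacements d_ij ~ x_j - x_i give an
        # overdetermined linear system for node positions; solve it in
        # the least-squares sense and pin node 0 at the origin.
        import numpy as np

        def solve_translations(n, edges):
            # edges: list of (i, j, d_ij) with d_ij a 2-D displacement estimate
            rows, rhs = [], []
            for i, j, d in edges:
                r = np.zeros(n); r[i], r[j] = -1.0, 1.0
                rows.append(r); rhs.append(d)
            A, b = np.array(rows), np.array(rhs)      # (m, n) and (m, 2)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x - x[0]                           # fix the global shift

        edges = [(0, 1, np.array([1.0, 0.0])),
                 (1, 2, np.array([0.0, 1.0])),
                 (0, 2, np.array([1.1, 0.9]))]
        print(solve_translations(3, edges))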

  16. Fundamental limits of reconstruction-based superresolution algorithms under local translation.

    PubMed

    Lin, Zhouchen; Shum, Heung-Yeung

    2004-01-01

    Superresolution is a technique that can produce images of a higher resolution than that of the originally captured ones. Nevertheless, improvement in resolution using such a technique is very limited in practice. This makes it important to study the question: "Do fundamental limits exist for superresolution?" In this paper, we focus on a major class of superresolution algorithms, called reconstruction-based algorithms, which compute high-resolution images by simulating the image formation process. Assuming local translation among low-resolution images, this paper is the first attempt to determine the explicit limits of reconstruction-based algorithms, under both real and synthetic conditions. Based on the perturbation theory of linear systems, we obtain the superresolution limits from a conditioning analysis of the coefficient matrix. Moreover, we determine the number of low-resolution images that is sufficient to achieve the limit. Both real and synthetic experiments are carried out to verify our analysis.

  17. Radiation and scattering from bodies of translation. Volume 2: User's manual, computer program documentation

    NASA Astrophysics Data System (ADS)

    Medgyesi-Mitschang, L. N.; Putnam, J. M.

    1980-04-01

    A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section, as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience is presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For a more detailed understanding of the workings of the codes, special cross-referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings and summarized in the present volume.

  18. Utility of preclinical drug versus food choice procedures to evaluate candidate medications for methamphetamine use disorder.

    PubMed

    Banks, Matthew L

    2017-04-01

    Substance use disorders are diagnosed as a manifestation of inappropriate behavioral allocation toward abused drugs and away from other behaviors maintained by more adaptive nondrug reinforcers (e.g., money and social relationships). Substance use disorder treatment goals include not only decreasing drug-maintained behavior but also promoting behavioral reallocation toward these socially adaptive alternative reinforcers. Preclinical drug self-administration procedures that offer concurrent access to both drug and nondrug reinforcers provide a translationally relevant dependent measure of behavioral allocation that may be useful for candidate medication evaluation. In contrast to other abused drugs, such as heroin or cocaine, preclinical methamphetamine versus food choice procedures have been a more recent development. We hypothesize that preclinical to clinical translatability would be improved by the evaluation of repeated pharmacological treatment effects on methamphetamine self-administration under a methamphetamine versus food choice procedure. In support of this hypothesis, a literature review suggests strong concordance between preclinical pharmacological treatment effects on methamphetamine versus food choice in nonhuman primates and clinical medication treatment effects on methamphetamine self-administration in human laboratory studies or methamphetamine abuse metrics in clinical trials. In conclusion, this literature suggests preclinical methamphetamine versus food choice procedures may be useful in developing innovative pharmacotherapies for methamphetamine use disorder. © 2016 New York Academy of Sciences.

  19. Utility of preclinical drug versus food choice procedures to evaluate candidate medications for methamphetamine use disorder

    PubMed Central

    Banks, Matthew L.

    2016-01-01

    Substance use disorders are diagnosed as a manifestation of inappropriate behavioral allocation towards abused drugs and away from other behaviors maintained by more adaptive nondrug reinforcers (e.g., work and social relationships). Substance use disorder treatment goals include not only decreasing drug-maintained behavior but also promoting behavioral reallocation toward these socially adaptive alternative reinforcers. Preclinical drug self-administration procedures that offer concurrent access to both drug and nondrug reinforcers provide a translationally relevant dependent measure of behavioral allocation that may be useful for candidate medication evaluation. In contrast to other abused drugs, such as heroin or cocaine, preclinical methamphetamine versus food choice procedures have been a more recent development. We hypothesize that preclinical to clinical translatability would be improved by the evaluation of repeated pharmacological treatment effects on methamphetamine self-administration under a methamphetamine versus food choice procedure. In support of this hypothesis, a literature review suggests strong concordance between preclinical pharmacological treatment effects on methamphetamine versus food choice in nonhuman primates and clinical medication treatment effects on methamphetamine self-administration in human laboratory studies or methamphetamine abuse metrics in clinical trials. In conclusion, this literature suggests preclinical methamphetamine versus food choice procedures may be useful in developing innovative pharmacotherapies for methamphetamine use disorder. PMID:27936284

  20. On the Right To Use the Language of One's Choice in Slovakia.

    ERIC Educational Resources Information Center

    Kontra, Miklos

    1997-01-01

    The text of a November 1995 Slovak Republic law concerning language use in that country is translated and analyzed from the perspective of a recent Linguistic Society of America (LSA) statement on language rights stating that speakers be allowed to express themselves, publicly or privately, in the language of their choice. The law provides that…

  1. School Choice: How an Abstract Idea Became a Political Reality

    ERIC Educational Resources Information Center

    Viteritti, Joseph P.

    2005-01-01

    This paper traces the evolution of the choice idea over three generations, from a market model concerned with economic liberty, to a demand for social justice based on equality, to a political movement that translates the idea into policy. Focusing on the last generation, it explains why the market concept has lacked political appeal and how…

  2. Predicting translational deformity following opening-wedge osteotomy for lower limb realignment.

    PubMed

    Barksfield, Richard C; Monsell, Fergal P

    2015-11-01

    An opening-wedge osteotomy is well recognised for the management of limb deformity and requires an understanding of the principles of geometry. Translation at the osteotomy is needed when the osteotomy is performed away from the centre of rotation of angulation (CORA), but the amount of translation varies with the distance from the CORA. This translation enables proximal and distal axes on either side of the proposed osteotomy to realign. We have developed two experimental models to establish whether the amount of translation required (based on the translation deformity created) can be predicted based upon simple trigonometry. A predictive algorithm was derived where translational deformity was predicted as 2(d × tan α), where α represents 50% of the desired angular correction and d is the distance of the desired osteotomy site from the CORA. A simulated model was developed using the TraumaCad online digital software suite (Brainlab AG, Germany). Osteotomies were simulated in the distal femur, proximal tibia and distal tibia for nine sets of lower limb scanograms at incremental distances from the CORA, and the resulting translational deformity was recorded. There was strong correlation between the distance of the osteotomy from the CORA and simulated translational deformity for distal femoral deformities (correlation coefficient 0.99, p < 0.0001), proximal tibial deformities (correlation coefficient 0.93-0.99, p < 0.0001) and distal tibial deformities (correlation coefficient 0.99, p < 0.0001). There was excellent agreement between the predictive algorithm and simulated translational deformity for all nine simulations (correlation coefficient 0.93-0.99, p < 0.0001). Translational deformity following corrective osteotomy for lower limb deformity can be anticipated and predicted based upon the angular correction and the distance between the planned osteotomy site and the CORA.
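
    The predictive relation quoted in the abstract reduces to a one-line function; a minimal sketch (the units and worked example are illustrative):

        # Sketch: predicted translational deformity t = 2 * d * tan(alpha),
        # with alpha = half the desired angular correction and d the
        # osteotomy-to-CORA distance, as stated in the abstract.
        import math

        def predicted_translation(correction_deg, distance_mm):
            alpha = math.radians(correction_deg / 2.0)
            return 2.0 * distance_mm * math.tan(alpha)

        # e.g., a 10 degree correction planned 40 mm from the CORA:
        print(round(predicted_translation(10.0, 40.0), 1), "mm")  # ~7.0 mm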

  3. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  4. Surgical motion characterization in simulated needle insertion procedures

    NASA Astrophysics Data System (ADS)

    Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor

    2012-02-01

    PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0mm). For amplitudes less than 0.01mm, the two algorithms produced insignificantly different results. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.

  5. Development and Translation of Hybrid Optoacoustic/Ultrasonic Tomography for Early Breast Cancer Detection

    DTIC Science & Technology

    2014-09-01

    The objective of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. Work to date has (i) developed time-of-flight extraction algorithms to perform USCT, (ii) developed image reconstruction algorithms for USCT, and (iii) developed…

  6. A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and Kappa

    Treesearch

    Elizabeth A. Freeman; Gretchen G. Moisen

    2008-01-01

    Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence-absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
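
    Although the abstract is truncated, the threshold-selection problem it describes can be illustrated directly. The sketch below scans candidate thresholds under two common criteria, maximizing Cohen's kappa and matching predicted prevalence to observed prevalence (synthetic data; scikit-learn assumed available):

        # Sketch: two threshold-selection criteria for converting a
        # predicted probability surface into presence/absence.
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        def pick_thresholds(y_true, p_pred):
            grid = np.linspace(0.01, 0.99, 99)
            kappas = [cohen_kappa_score(y_true, (p_pred >= t).astype(int))
                      for t in grid]
            t_kappa = grid[int(np.argmax(kappas))]        # (a) maximize kappa
            prev = y_true.mean()
            t_prev = grid[int(np.argmin([abs((p_pred >= t).mean() - prev)
                                         for t in grid]))]  # (b) match prevalence
            return t_kappa, t_prev

        rng = np.random.default_rng(1)
        p = rng.random(500)
        y = (rng.random(500) < p).astype(int)   # synthetic presence/absence
        print(pick_thresholds(y, p))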

  7. Challenges and Insights in Using HIPAA Privacy Rule for Clinical Text Annotation.

    PubMed

    Kayaalp, Mehmet; Browne, Allen C; Sagan, Pamela; McGee, Tyne; McDonald, Clement J

    2015-01-01

    The Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA) requires that clinical documents be stripped of personally identifying information before they can be released to researchers and others. We have been manually annotating clinical text since 2008 in order to test and evaluate an algorithmic clinical text de-identification tool, NLM Scrubber, which we have been developing in parallel. Although HIPAA provides some guidance about what must be de-identified, translating those guidelines into practice is not straightforward, especially when one deals with free text. As a result, we have changed our manual annotation labels and methods six times. This paper explains why we have made those annotation choices, which have evolved over seven years of practice in this field. The aim of this paper is to start a community discussion toward developing standards for clinical text annotation, with the end goal of studying and comparing clinical text de-identification systems more accurately.

  8. Autonomy and Privacy in Clinical Laboratory Science Policy and Practice.

    PubMed

    Leibach, Elizabeth Kenimer

    2014-01-01

    Rapid advancements in diagnostic technologies coupled with growth in testing options and choices mandate the development of evidence-based testing algorithms linked to the care paths of the major chronic diseases and health challenges encountered most frequently. As care paths are evaluated, patient/consumers become partners in healthcare delivery. Clinical laboratory scientists find themselves firmly embedded in both quality improvement and clinical research with an urgent need to translate clinical laboratory information into knowledge required by practitioners and patient/consumers alike. To implement this patient-centered care approach in clinical laboratory science, practitioners must understand their roles in (1) protecting patient/consumer autonomy in the healthcare informed consent process and (2) assuring patient/consumer privacy and confidentiality while blending quality improvement study findings with protected health information. A literature review, describing the current ethical environment, supports a consultative role for clinical laboratory scientists in the clinical decision-making process and suggests guidance for policy and practice regarding the principle of autonomy and its associated operational characteristics: informed consent and privacy.

  9. Classifying Medical Images Using Morphological Appearance Manifolds.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  10. Synonymous codon choices in the extremely GC-poor genome of Plasmodium falciparum: compositional constraints and translational selection.

    PubMed

    Musto, H; Romero, H; Zavala, A; Jabbari, K; Bernardi, G

    1999-07-01

    We have analyzed the patterns of synonymous codon preferences of the nuclear genes of Plasmodium falciparum, a unicellular parasite characterized by an extremely GC-poor genome. When all genes are considered, codon usage is strongly biased toward A and T in third codon positions, as expected, but multivariate statistical analysis detects a major trend among genes. At one end genes display codon choices determined mainly by the extreme genome composition of this parasite, and very probably their expression level is low. At the other end a few genes exhibit an increased relative usage of a particular subset of codons, many of which are C-ending. Since the majority of these few genes is putatively highly expressed, we postulate that the increased C-ending codons are translationally optimal. In conclusion, while codon usage of the majority of P. falciparum genes is determined mainly by compositional constraints, a small number of genes exhibit translational selection.
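
    A minimal sketch of two simple codon-usage summaries of the kind used in such analyses, GC content at third codon positions (GC3) and raw codon counts (from which relative synonymous codon usage is derived), applied to a toy coding sequence:

        # Sketch: GC3 and per-codon counts for a toy coding sequence.
        from collections import Counter

        def codons(seq):
            # split into complete triplets, dropping any trailing partial codon
            return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

        def gc3(seq):
            thirds = [c[2] for c in codons(seq)]
            return sum(b in "GC" for b in thirds) / len(thirds)

        cds = "ATGAAATTTGGCCATTAA"
        print(f"GC3 = {gc3(cds):.2f}")   # 0.33 for this toy sequence
        print(Counter(codons(cds)))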

  11. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice

    PubMed Central

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and the dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared errors (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209
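
    The alternating structure can be illustrated on a toy single-OD, two-route network: given a dispersion parameter theta, a logit (SUE-style) assignment yields route proportions; given proportions, a least-squares step re-estimates demand from observed counts; theta is then refit. This is a sketch only; all numbers, the fixed route costs, and the identity-weighted GLS step are illustrative assumptions, not the paper's model.

```python
# Sketch: alternating GLS demand estimation and logit assignment on a
# toy network with one OD pair and two routes.
import numpy as np

route_costs = np.array([10.0, 12.0])         # fixed route costs (toy)
observed_counts = np.array([620.0, 380.0])   # observed link counts (toy)

def logit_proportions(theta):
    u = np.exp(-theta * route_costs)
    return u / u.sum()

demand, theta = 800.0, 0.1                   # crude starting values
for _ in range(50):
    p = logit_proportions(theta)
    # GLS step with identity weights: demand minimizing ||q - p*demand||^2
    demand = (p @ observed_counts) / (p @ p)
    # refit theta to match the observed split, by 1-D grid search
    target = observed_counts / observed_counts.sum()
    thetas = np.linspace(0.01, 2.0, 400)
    errs = [np.abs(logit_proportions(t) - target).sum() for t in thetas]
    theta = thetas[int(np.argmin(errs))]

print(f"demand ~ {demand:.0f}, theta ~ {theta:.3f}")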

  12. Improved numerical methods for infinite spin chains with long-range interactions

    NASA Astrophysics Data System (ADS)

    Nebendahl, V.; Dür, W.

    2013-02-01

    We present several improvements of the infinite matrix product state (iMPS) algorithm for finding ground states of one-dimensional quantum systems with long-range interactions. As a main ingredient, we introduce the superposed multioptimization method, which allows an efficient optimization of exponentially many MPS of different lengths at different sites, all in one step. This protects the algorithm against position-dependent effects such as those caused by spontaneously broken translational invariance, which have so far been a major obstacle to convergence for the iMPS algorithm when no prior knowledge of the system's translational symmetry was available. Further, we investigate some more general methods to speed up calculations and improve convergence, some of which may also be of interest in a much broader context. As a more specialized problem, we also look into translationally invariant states close to an invariance-breaking phase transition and show how to avoid convergence into wrong local minima for such systems. Finally, we apply these methods to polar bosons with long-range interactions. We calculate several detailed Devil's staircases with the corresponding phase diagrams and investigate some supersolid properties.

  13. Use of an algorithm in choosing abdominoplasty techniques.

    PubMed

    Fernandes, Júlio Wilson; Damin, Renata; Holzmann, Marcos Vinícius Nasser; Ribas, Gabriel Gomes DE Oliveira

    2018-01-01

    To validate an algorithm for choosing the abdominoplasty surgical technique among the five approaches established in the literature, according to the characteristics of the abdominal wall. We conducted a retrospective study of 245 patients undergoing abdominoplasty, for whom the surgical technique was chosen using the proposed algorithm, based on the degree of abdominal flaccidity determined by bimanual maneuver. We studied its applications and conveniences, as well as the complications inherent in each group studied. According to the algorithm used, the most frequently chosen technique was "Technique IV" (transverse dermolipectomy of Pitanguy, or with a Baroudi-Kepke incision), in 25.71% of the cases. "Technique I" (mini abdominoplasty) had the lowest incidence and the lowest rate of complications. In contrast, "Technique III", dermolipectomy with a remaining vertical scar, presented a higher incidence of complications, requiring extreme caution in its indication, particularly in relation to patients' expectations regarding the resulting scar and its legal aspects. Among all conducts, the most frequent complication was seroma, with a 10.2% occurrence, resolved by simple syringe aspiration and use of an elastic compression mesh. The proposed algorithm facilitated the choice of abdominoplasty techniques, offering satisfactory results in line with the complication rates published in the world literature.

  14. Serotonin Depletion Induces ‘Waiting Impulsivity' on the Human Four-Choice Serial Reaction Time Task: Cross-Species Translational Significance

    PubMed Central

    Worbe, Yulia; Savulich, George; Voon, Valerie; Fernandez-Egea, Emilio; Robbins, Trevor W

    2014-01-01

    Convergent results from animal and human studies suggest that reducing serotonin neurotransmission promotes impulsive behavior. Here, serotonin depletion was induced by the dietary tryptophan depletion procedure (TD) in healthy volunteers to examine the role of serotonin in impulsive action and impulsive choice. We used a novel translational analog of a rodent 5-choice serial reaction time task (5-CSRTT)— the human 4-CSRTT—and a reward delay-discounting questionnaire to measure effects on these different forms of ‘waiting impulsivity'. There was no effect of TD on impulsive choice as indexed by the reward delay-discounting questionnaire. However, TD significantly increased 4-CSRTT premature responses (or impulsive action), which is remarkably similar to the previous findings of effect of serotonin depletion on rodent 5-CSRTT performance. Moreover, the increased premature responding in TD correlated significantly with individual differences on the motor impulsivity subscale of the Barratt Impulsivity Scale. TD also improved the accuracy of performance and speeded responding, possibly indicating enhanced attention and reward processing. The results suggest: (i) the 4-CSRTT will be a valuable addition to the tests already available to measure impulsivity in humans in a direct translational analog of a test extensively used in rodents; (ii) TD in humans produces a qualitatively similar profile of effects to those in rodents (ie, enhancing premature responding), hence supporting the conclusion that TD in humans exerts at least some of its effects on central serotonin; and (iii) this manipulation of serotonin produces dissociable effects on different measures of impulsivity, suggesting considerable specificity in its modulatory role. PMID:24385133

  15. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is NP-complete. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function, and a hybrid differential evolution algorithm is proposed to solve it. The hybrid makes full use of the strong local search capacity of hill climbing and the strong global search capability of differential evolution, compensating for the weaknesses of each, improving the efficiency of the algorithm, and avoiding the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.
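
    A minimal sketch of this translation, assuming a standard DE scheme rather than the paper's exact operators: the objective is the number of unsatisfied clauses, differential evolution searches real vectors in [0,1]^n thresholded to truth assignments, and a hill-climbing pass refines the best candidate by greedy bit flips.

```python
# Sketch: SAT as minimization of the number of unsatisfied clauses,
# solved by differential evolution plus hill-climbing refinement.
import random

# CNF: each clause is a list of signed 1-based variable indices.
clauses = [[1, -2], [-1, 2], [2, 3], [-3, 1]]
n = 3

def unsat(assign):            # objective to minimize (0 means satisfied)
    return sum(not any(assign[abs(l) - 1] == (l > 0) for l in c)
               for c in clauses)

def to_assign(x):             # threshold a real vector to booleans
    return [xi > 0.5 for xi in x]

pop = [[random.random() for _ in range(n)] for _ in range(20)]
F, CR = 0.8, 0.9              # conventional DE settings (illustrative)
for _ in range(100):
    for i in range(len(pop)):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR
                 else pop[i][k] for k in range(n)]
        if unsat(to_assign(trial)) <= unsat(to_assign(pop[i])):
            pop[i] = trial

best = to_assign(min(pop, key=lambda x: unsat(to_assign(x))))
improved = True
while improved:               # hill climbing: greedy single-bit flips
    improved = False
    for k in range(n):
        cand = best[:]; cand[k] = not cand[k]
        if unsat(cand) < unsat(best):
            best, improved = cand, True
print(best, "unsatisfied clauses:", unsat(best))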

  16. The Kidney Disease Screening and Awareness Program (KDSAP): A Novel Translatable Model for Increasing Interest in Nephrology Careers

    PubMed Central

    Wu, Jingshing; Yeh, Albert C.; Shieh, Eric C.; Cui, Cheryl; Polding, Laura C.; Ahmed, Rayhnuma; Lim, Kenneth; Lu, Tzong-Shi; Rhee, Connie M.; Bonventre, Joseph V.

    2014-01-01

    Despite the increasing prevalence of CKD in the United States, there is a declining interest among United States medical graduates in nephrology as a career choice. Effective programs are needed to generate interest at early educational stages when career choices can be influenced. The Kidney Disease Screening and Awareness Program (KDSAP) is a novel program initiated at Harvard College that increases student knowledge of and interest in kidney health and disease, interest in nephrology career paths, and participation in kidney disease research. This model, built on physician mentoring, kidney screening of underserved populations, direct interactions with kidney patients, and opportunities to participate in kidney research, can be reproduced and translated to other workforce-challenged subspecialties. PMID:24876120

  17. Eye Movements in Darkness Modulate Self-Motion Perception.

    PubMed

    Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter

    2017-01-01

    During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged to be larger more often than expected by chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.

  18. Eye Movements in Darkness Modulate Self-Motion Perception

    PubMed Central

    Pomante, Antonella

    2017-01-01

    During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged to be larger more often than expected by chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation. PMID:28144623

  19. Education and Work in Rural America--The Social Context of Early Career Decision and Achievement.

    ERIC Educational Resources Information Center

    Cosby, Arthur G., Ed.; Charner, Ivan, Ed.

    The career and career-related preferences of rural youth, and the degree to which those choices were translated into adult behavior, were investigated by tracing a rural sample of southern 1968 high school graduates through their first four years after high school. The focus was on the choices expressed and attainments experienced with respect to…

  20. Runtime Analysis of Linear Temporal Logic Specifications

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus

    2001-01-01

    This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
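
    The tools above compile LTL to automata; as a simpler illustration of the finite-trace semantics involved, the sketch below evaluates a small LTL fragment directly by recursion over a recorded execution trace. This is not the JPaX algorithm, and the trace and property are invented examples.

```python
# Sketch: direct finite-trace evaluation of a small LTL fragment.
# Formulae are nested tuples; each trace step is a set of propositions.
def holds(f, trace, i=0):
    """Does formula f hold at position i of the finite trace?"""
    if i >= len(trace):
        return False                      # finite-trace semantics: end = False
    op = f[0]
    if op == "ap":  return f[1] in trace[i]           # atomic proposition
    if op == "not": return not holds(f[1], trace, i)
    if op == "and": return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "X":   return holds(f[1], trace, i + 1)  # next
    if op == "F":   return any(holds(f[1], trace, k)
                               for k in range(i, len(trace)))
    if op == "G":   return all(holds(f[1], trace, k)
                               for k in range(i, len(trace)))
    if op == "U":   # f1 U f2: f2 eventually holds, f1 holds until then
        return any(holds(f[2], trace, k) and
                   all(holds(f[1], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

trace = [{"req"}, set(), {"ack"}, set()]

# G(req -> F ack), written via not/and: every request is acknowledged.
prop = ("G", ("not", ("and", ("ap", "req"),
                      ("not", ("F", ("ap", "ack"))))))
print(holds(prop, trace))   # True on this trace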

  1. Image fusion using sparse overcomplete feature dictionaries

    DOEpatents

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
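
    The sketch below illustrates only the local max pooling step named above, which is what confers translation tolerance; the dictionary learning and classification stages are not reproduced, and the feature map is a toy example.

```python
# Sketch: non-overlapping local max pooling of a sparse feature map.
# A small translation of the input leaves the pooled map unchanged.
import numpy as np

def max_pool(fmap, size=4):
    """Non-overlapping size x size max pooling of a 2-D feature map."""
    h, w = (fmap.shape[0] // size) * size, (fmap.shape[1] // size) * size
    blocks = fmap[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fmap = np.zeros((16, 16))
fmap[5, 5] = 1.0                                     # one active "neuron"
shifted = np.roll(fmap, shift=(1, 2), axis=(0, 1))   # small translation

print(np.array_equal(max_pool(fmap), max_pool(shifted)))  # True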

  2. Automata-Based Verification of Temporal Properties on Running Programs

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)

    2001-01-01

    This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.

  3. Knowledge translation of research findings.

    PubMed

    Grimshaw, Jeremy M; Eccles, Martin P; Lavis, John N; Hill, Sophie J; Squires, Janet E

    2012-05-31

    One of the most consistent findings from clinical and health services research is the failure to translate research into practice and policy. As a result of these evidence-practice and policy gaps, patients fail to benefit optimally from advances in healthcare and are exposed to unnecessary risks of iatrogenic harms, and healthcare systems are exposed to unnecessary expenditure resulting in significant opportunity costs. Over the last decade, there has been increasing international policy and research attention on how to reduce the evidence-practice and policy gap. In this paper, we summarise the current concepts and evidence to guide knowledge translation activities, defined as T2 research (the translation of new clinical knowledge into improved health). We structure the article around five key questions: what should be transferred; to whom should research knowledge be transferred; by whom should research knowledge be transferred; how should research knowledge be transferred; and, with what effect should research knowledge be transferred? We suggest that the basic unit of knowledge translation should usually be up-to-date systematic reviews or other syntheses of research findings. Knowledge translators need to identify the key messages for different target audiences and to fashion these in language and knowledge translation products that are easily assimilated by different audiences. The relative importance of knowledge translation to different target audiences will vary by the type of research and appropriate endpoints of knowledge translation may vary across different stakeholder groups. There are a large number of planned knowledge translation models, derived from different disciplinary, contextual (i.e., setting), and target audience viewpoints. Most of these suggest that planned knowledge translation for healthcare professionals and consumers is more likely to be successful if the choice of knowledge translation strategy is informed by an assessment of the likely barriers and facilitators. Although our evidence on the likely effectiveness of different strategies to overcome specific barriers remains incomplete, there is a range of informative systematic reviews of interventions aimed at healthcare professionals and consumers (i.e., patients, family members, and informal carers) and of factors important to research use by policy makers. There is a substantial (if incomplete) evidence base to guide choice of knowledge translation activities targeting healthcare professionals and consumers. The evidence base on the effects of different knowledge translation approaches targeting healthcare policy makers and senior managers is much weaker, but there is a profusion of innovative approaches that warrant further evaluation.

  4. Development of an algorithm for improving quality and information processing capacity of MathSpeak synthetic speech renderings.

    PubMed

    Isaacson, M D; Srinivasan, S; Lloyd, L L

    2010-01-01

    MathSpeak is a set of rules for speaking mathematical expressions non-ambiguously. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity, and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering, and mathematics academic and career objectives.

  5. A Semisupervised Support Vector Machines Algorithm for BCI Systems

    PubMed Central

    Qin, Jianzhao; Li, Yuanqing; Sun, Wei

    2007-01-01

    As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
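
    A minimal self-training sketch of the semisupervised SVM idea, using scikit-learn; the paper's batch-mode incremental scheme and CSP feature extraction are not reproduced, and the "EEG features" below are synthetic. The classifier is trained on the small labeled set, then repeatedly augmented with its most confident predictions on the unlabeled pool.

```python
# Sketch: self-training semisupervised SVM on synthetic two-class data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Small labeled set, large unlabeled pool (toy stand-ins for EEG features)
Xl = np.vstack([rng.normal(-1, 1, (10, 5)), rng.normal(1, 1, (10, 5))])
yl = np.array([0] * 10 + [1] * 10)
Xu = np.vstack([rng.normal(-1, 1, (200, 5)), rng.normal(1, 1, (200, 5))])

clf = SVC(kernel="linear")
for _ in range(5):                      # a few self-training batches
    clf.fit(Xl, yl)
    if len(Xu) == 0:
        break
    conf = np.abs(clf.decision_function(Xu))   # distance from the margin
    take = np.argsort(conf)[-40:]              # most confident samples
    Xl = np.vstack([Xl, Xu[take]])
    yl = np.concatenate([yl, clf.predict(Xu[take])])
    Xu = np.delete(Xu, take, axis=0)

print("final training set size:", len(Xl))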

  6. Avoiding the Target Language with the Help of Google: Managing Language Choices in Gathering Information for EFL Project Work

    ERIC Educational Resources Information Center

    Musk, Nigel

    2014-01-01

    The integration of translation tools into the Google search engine has led to a huge increase in the visibility and accessibility of such tools, with potentially far-reaching implications for the English language classroom. Although these translation tools are the focus of this study, using them is in fact only one way in which English language…

  7. Factoring symmetric indefinite matrices on high-performance architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1990-01-01

    The Bunch-Kaufman algorithm is the method of choice for factoring symmetric indefinite matrices in many applications. However, the Bunch-Kaufman algorithm does not take advantage of high-performance architectures such as the Cray Y-MP. Three new algorithms, based on Bunch-Kaufman factorization, that take advantage of such architectures are described. Results from an implementation of the third algorithm are presented.
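
    For reference, a short sketch of symmetric indefinite factorization as exposed in SciPy, whose scipy.linalg.ldl wraps the LAPACK *sytrf routines built on Bunch-Kaufman pivoting; the Cray-specific variants discussed above are, of course, not part of this interface.

```python
# Sketch: LDL^T factorization of a symmetric indefinite matrix.
import numpy as np
from scipy.linalg import ldl

A = np.array([[ 2., 1., -1.],
              [ 1., 0.,  3.],     # indefinite: a zero diagonal pivot arises
              [-1., 3.,  1.]])

# D is block diagonal and may contain 2x2 pivot blocks where 1x1 pivots
# would be unstable; that is the essence of Bunch-Kaufman pivoting.
L, D, perm = ldl(A, lower=True)
print(np.allclose(A, L @ D @ L.T))   # True
print(np.round(D, 3))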

  8. Translation norms for English and Spanish: The role of lexical variables, word class, and L2 proficiency in negotiating translation ambiguity

    PubMed Central

    Prior, Anat; MacWhinney, Brian; Kroll, Judith F.

    2014-01-01

    We present a set of translation norms for 670 English and 760 Spanish nouns, verbs and class ambiguous items that varied in their lexical properties in both languages, collected from 80 bilingual participants. Half of the words in each language received more than a single translation across participants. Cue word frequency and imageability were both negatively correlated with number of translations. Word class predicted number of translations: Nouns had fewer translations than did verbs, which had fewer translations than class-ambiguous items. The translation probability of specific responses was positively correlated with target word frequency and imageability, and with its form overlap with the cue word. Translation choice was modulated by L2 proficiency: Less proficient bilinguals tended to produce lower probability translations than more proficient bilinguals, but only in forward translation, from L1 to L2. These findings highlight the importance of translation ambiguity as a factor influencing bilingual representation and performance. The norms can also provide an important resource to assist researchers in the selection of experimental materials for studies of bilingual and monolingual language performance. These norms may be downloaded from www.psychonomic.org/archive. PMID:18183923

  9. ATR architecture for multisensor fusion

    NASA Astrophysics Data System (ADS)

    Hamilton, Mark K.; Kipp, Teresa A.

    1996-06-01

    The work of the U.S. Army Research Laboratory (ARL) in the area of algorithms for the identification of static military targets in single-frame electro-optical (EO) imagery has demonstrated great potential in platform-based automatic target identification (ATI). In this case, the term identification is used to mean being able to tell the difference between two military vehicles -- e.g., the M60 from the T72. ARL's work includes not only single-sensor forward-looking infrared (FLIR) ATI algorithms, but also multi-sensor ATI algorithms. We briefly discuss ARL's hybrid model-based/data-learning strategy for ATI, which represents a significant step forward in ATI algorithm design. For example, in the case of single sensor FLIR it allows the human algorithm designer to build directly into the algorithm knowledge that can be adequately modeled at this time, such as the target geometry which directly translates into the target silhouette in the FLIR realm. In addition, it allows structure that is not currently well understood (i.e., adequately modeled) to be incorporated through automated data-learning algorithms, which in a FLIR directly translates into an internal thermal target structure signature. This paper shows the direct applicability of this strategy to both the single-sensor FLIR as well as the multi-sensor FLIR and laser radar.

  10. Acceptance test of a commercially available software for automatic image registration of computed tomography (CT), magnetic resonance imaging (MRI) and 99mTc-methoxyisobutylisonitrile (MIBI) single-photon emission computed tomography (SPECT) brain images.

    PubMed

    Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco

    2008-09-01

    This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two couples, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each couple performed the registration jointly and saved the result. The two solutions were averaged to obtain the gold standard registration. A new set of estimators was defined to identify translation and rotation errors in the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI, and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and the registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one CT-MRI case and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.

  11. Reinforcement learning and decision making in monkeys during a competitive game.

    PubMed

    Lee, Daeyeol; Conroy, Michelle L; McGreevy, Benjamin P; Barraclough, Dominic J

    2004-12-01

    Animals living in a dynamic environment must adjust their decision-making strategies through experience. To gain insights into the neural basis of such adaptive decision-making processes, we trained monkeys to play a competitive game against a computer in an oculomotor free-choice task. The animal selected one of two visual targets in each trial and was rewarded only when it selected the same target as the computer opponent. To determine how the animal's decision-making strategy can be affected by the opponent's strategy, the computer opponent was programmed with three different algorithms that exploited different aspects of the animal's choice and reward history. When the computer selected its targets randomly with equal probabilities, animals selected one of the targets more often, violating the prediction of probability matching, and their choices were systematically influenced by the choice history of the two players. When the computer exploited only the animal's choice history but not its reward history, animal's choice became more independent of its own choice history but was still related to the choice history of the opponent. This bias was substantially reduced, but not completely eliminated, when the computer used the choice history of both players in making its predictions. These biases were consistent with the predictions of reinforcement learning, suggesting that the animals sought optimal decision-making strategies using reinforcement learning algorithms.
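
    A minimal sketch of the reinforcement-learning account suggested by these results: a softmax agent updates action values from reward while playing a matching-pennies-style game against an opponent that exploits the agent's recent choice frequencies. The learning rate, temperature, and opponent rule are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: softmax RL agent vs. a choice-history-exploiting opponent.
import random, math

alpha, beta = 0.2, 3.0          # learning rate, inverse temperature (toy)
v = [0.0, 0.0]                  # action values for targets 0 and 1
history, wins, trials = [], 0, 5000

def softmax_choice():
    p1 = 1.0 / (1.0 + math.exp(-beta * (v[1] - v[0])))
    return 1 if random.random() < p1 else 0

for _ in range(trials):
    a = softmax_choice()
    recent = history[-20:]
    if len(recent) < 5:
        comp = random.randint(0, 1)
    else:
        # Opponent predicts the agent's more frequent recent choice and
        # plays the other target (the agent is rewarded on matches).
        predicted = 1 if sum(recent) > len(recent) / 2 else 0
        comp = 1 - predicted
    r = 1.0 if a == comp else 0.0
    v[a] += alpha * (r - v[a])  # Rescorla-Wagner style value update
    history.append(a)
    wins += r

print("reward rate:", wins / trials)  # drifts toward ~0.5 as choices randomize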

  12. Variable selection and model choice in geoadditive regression models.

    PubMed

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  13. New inverse synthetic aperture radar algorithm for translational motion compensation

    NASA Astrophysics Data System (ADS)

    Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.

    1991-10-01

    Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected, and considerable effort has been expended to develop algorithms to form high-resolution images from these data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must first be accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
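
    For context, the sketch below implements the traditional image-entropy focus measure that the burst derivative is proposed to replace: candidate motion-compensation parameters are scored by the Shannon entropy of the normalized image energy, with lower entropy indicating a better-focused image. The burst derivative measure itself is not reproduced, and the images are toy arrays.

```python
# Sketch: entropy focus measure for autofocus-style scoring.
import numpy as np

def image_entropy(img):
    """Shannon entropy of normalized image energy; lower = sharper."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
focused = np.zeros((64, 64)); focused[30:34, 30:34] = 1.0  # compact target
blurred = rng.random((64, 64))                             # smeared energy

print(image_entropy(focused) < image_entropy(blurred))     # True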

  14. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    PubMed

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  15. A Palmprint Recognition Algorithm Using Phase-Only Correlation

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
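
    A minimal NumPy sketch of Phase-Only Correlation for translation estimation: the cross-power spectrum of two images is normalized to keep only phase, and its inverse transform peaks at the relative shift, with the peak height serving as a similarity score. The paper's handling of scaling and rotation is not reproduced, and the test images are random arrays.

```python
# Sketch: POC translation estimation via the phase-only cross spectrum.
import numpy as np

def poc_shift(f, g):
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = F * np.conj(G)
    R /= np.maximum(np.abs(R), 1e-12)          # phase-only spectrum
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = f.shape                             # map peak to signed shifts
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return dy, dx, corr.max()                  # peak height ~ similarity

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(poc_shift(shifted, img))                 # ~ (5, -3, ~1.0)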

  16. Simultaneous and semi-alternating projection algorithms for solving split equality problems.

    PubMed

    Dong, Qiao-Li; Jiang, Dan

    2018-01-01

    In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.

  17. Analysis of methods of processing of expert information by optimization of administrative decisions

    NASA Astrophysics Data System (ADS)

    Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.

    2018-03-01

    This paper proposes a methodology for defining measures in the expert estimation of the quality and reliability of application-oriented software products. Methods for aggregating expert estimates are described using the example of a collective choice among instrumental control projects in the development of special-purpose software for institutional needs. Results from the operation of a dialogue-based decision support system are given, together with an algorithm for solving the choice task based on the analytic hierarchy process. The developed algorithm can be applied in the development of expert systems for solving a wide class of problems connected in one way or another with multicriteria choice.
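
    A minimal sketch of the analytic-hierarchy-process step referred to above: priorities for the alternatives are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the coherence of the expert judgments. The matrix values are illustrative.

```python
# Sketch: AHP priority vector and consistency ratio for three projects.
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (illustrative judgments)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = 0.58                                     # random index for n = 3
print("priorities:", np.round(w, 3), "CR =", round(ci / ri, 3))  # CR < 0.1 is acceptable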

  18. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I will l discuss the implementation and performance of parallel genetic algorithms in SISAL.

  19. Value-based decision-making battery: A Bayesian adaptive approach to assess impulsive and risky behavior.

    PubMed

    Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N

    2018-02-01

    Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unanswered. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. Then the current parameter estimates are updated by the likelihood of observing the choice, and the next offers are provided from the indifference point, so that they will acquire the most informative data based on the current parameter estimates. The procedure continues for a certain number of trials in order to reach a stable estimation. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments, and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
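
    A minimal sketch of the Bayesian adaptive idea for the delay-discounting case: a grid posterior over the hyperbolic discounting rate k is updated by the softmax likelihood of each observed choice, and the next delayed offer is placed at the current indifference point. A simulated participant with a known k supplies responses; all settings are illustrative, not the battery's exact design.

```python
# Sketch: grid-based Bayesian adaptive estimation of a hyperbolic
# discounting rate k, with offers placed at the running indifference point.
import numpy as np

rng = np.random.default_rng(0)
ks = np.logspace(-3, 0, 200)            # candidate discounting rates
post = np.ones_like(ks) / len(ks)       # uniform prior
beta, true_k = 5.0, 0.08                # choice noise, simulated "true" k
immediate, delay = 50.0, 30.0           # fixed immediate amount and delay

def p_delayed(k, delayed_amount):
    """Softmax probability of choosing the delayed option, V = A/(1+kD)."""
    v_del = delayed_amount / (1 + k * delay)
    return 1.0 / (1.0 + np.exp(-beta * (v_del - immediate) / immediate))

for trial in range(40):
    k_hat = float(np.sum(ks * post))                  # posterior mean
    offer = immediate * (1 + k_hat * delay)           # indifference point
    choice = rng.random() < p_delayed(true_k, offer)  # simulated response
    like = p_delayed(ks, offer) if choice else 1 - p_delayed(ks, offer)
    post = post * like
    post /= post.sum()

print(f"true k = {true_k}, estimated k = {np.sum(ks * post):.3f}")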

  20. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...
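
    A short worked version of the regulation's formula. The radical is read as covering both terms, i.e. u = (−0.221794 + √(0.049193 + 0.410769F)) / 0.456410, which is consistent with a quadratic-formula form since 0.221794² ≈ 0.049193. Here F is the FARS-based fraction of belted fatalities and u the estimated observed seat belt use rate; the sample values of F are illustrative.

```python
# Sketch: evaluating the seat belt use rate formula for a few F values.
import math

def estimated_use_rate(F):
    # Radical assumed to span both terms (see note above).
    return (-0.221794 + math.sqrt(0.049193 + 0.410769 * F)) / 0.456410

for F in (0.2, 0.4, 0.6):
    print(f"F = {F:.1f} -> u = {estimated_use_rate(F):.3f}")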

  1. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...

  2. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...

  3. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...

  4. 23 CFR Appendix B to Part 1240 - Procedures for Missing or Inadequate State-Submitted Information (Calendar Years 1996 and 1997)

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...

  5. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    NASA Astrophysics Data System (ADS)

    Colferai, D.; Niccoli, A.

    2015-04-01

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculation of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented, for various observables of phenomenological interest. For values of the jet "radius" R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measure of the azimuthal decorrelation of dijets.

  6. 77 FR 59444 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-27

    ... provides a ``menu'' of matching algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different matching algorithms on a class-by-class basis. The menu includes, among other choices, the ultimate matching algorithm (``UMA''), as well as price-time...

  7. Techniques and Tools for Trustworthy Composition of Pre-Designed Embedded Software Components

    DTIC Science & Technology

    2012-07-01

    following option choices. 1. A plain vanilla pi-trie algorithm set to build the entire pi-trie. 2. A pi-trie algorithm filtered for positive prime...implicates only. 3. A plain vanilla pi-trie algorithm to build the entire pi-trie, but recognize variable-disjoint subformulas. 4. A pi-trie

  8. Numerical stability of the error diffusion concept

    NASA Astrophysics Data System (ADS)

    Weissbach, Severin; Wyrowski, Frank

    1992-10-01

    The error diffusion algorithm is an easily implementable means to handle nonlinearities in signal processing, e.g., in image binarization and the coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.
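
    A minimal sketch of error diffusion for image binarization, using the familiar Floyd-Steinberg weights as one concrete (and commonly stable) choice; the stability criterion analyzed in the paper is not reproduced here.

```python
# Sketch: error diffusion binarization with Floyd-Steinberg weights.
# The quantization error at each pixel is diffused to unprocessed
# neighbors; the weight choice governs the numerical stability discussed above.
import numpy as np

def error_diffuse(img):
    """Binarize a grayscale image in [0, 1] by error diffusion."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0, 1, 16), (4, 1))
print(error_diffuse(gradient))          # halftoned 0/1 pattern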

  9. Backfilling with guarantees granted upon job submission.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, Vitus Joseph; Bunde, David P.; Lindsay, Alexander M.

    2011-01-01

    In this paper, we present scheduling algorithms that simultaneously support guaranteed starting times and favor jobs with system-desired traits. To achieve the first of these goals, our algorithms keep a profile with potential starting times for every unfinished job and never move these starting times later, just as in Conservative Backfilling. To achieve the second, they exploit previously unrecognized flexibility in the handling of holes opened in this profile when jobs finish early. We find that, with one choice of job selection function, our algorithms can consistently yield a lower average waiting time than Conservative Backfilling while still providing a guaranteed start time to each job as it arrives. In fact, in most cases, the algorithms give a lower average waiting time than the more aggressive EASY backfilling algorithm, which does not provide guaranteed start times. Alternately, with a different choice of job selection function, our algorithms can focus the benefit on the widest submitted jobs, the reason for the existence of parallel systems. In this case, these jobs experience significantly lower waiting time than Conservative Backfilling with minimal impact on other jobs.

  10. On the Impact of Localization and Density Control Algorithms in Target Tracking Applications for Wireless Sensor Networks

    PubMed Central

    Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.

    2012-01-01

    Target tracking is an important application of wireless sensor networks. The network's ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments. Therefore, density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, among the evaluated density control algorithms, OGDC is the best option of the three. Although the choice of the density control algorithm has little impact on the tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329

  11. Cone beam CT imaging with limited angle of projections and prior knowledge for volumetric verification of non-coplanar beam radiation therapy: a proof of concept study

    NASA Astrophysics Data System (ADS)

    Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang

    2013-11-01

    Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and a translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and a physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of typically 2-3-fold (up to 100-fold) in root-mean-square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angle of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof of concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.

  12. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    PubMed

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html

  13. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

    Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, unanswered aspects of performance variability remain for the choice of two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach we have used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing eight machine learning algorithms on down-sampled segmentation MR data, it was clear that a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation between these two algorithms also revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied on a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using the ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity normalization.

  14. Network compensation for missing sensors

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1991-01-01

    A network learning translation invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to output units affected by the loss.

  15. Array architectures for iterative algorithms

    NASA Technical Reports Server (NTRS)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  16. Smart Phase Tuning in Microwave Photonic Integrated Circuits Toward Automated Frequency Multiplication by Design

    NASA Astrophysics Data System (ADS)

    Nabavi, N.

    2018-07-01

    The author investigates monitoring methods for fine adjustment of a previously proposed on-chip architecture for frequency multiplication and translation of harmonics by design. Digital signal processing (DSP) algorithms are utilized to optimize the functionality of the microwave photonic integrated circuit toward automated frequency multiplication. The implemented DSP algorithms are based on the discrete Fourier transform and on optimization methods (greedy and gradient-based algorithms), which are analytically derived and numerically compared with respect to accuracy and speed-of-convergence criteria.

  17. Man With Dog and a Madeleine.

    ERIC Educational Resources Information Center

    Sharpe, Matthew

    2003-01-01

    Includes an interview with writer Lydia Davis. Discusses her definition of story, her use of endings, and her language choice. Provides an excerpt of her translation of Marcel Proust's "Swann's Way." (PM)

  18. An algorithmic approach to the brain biopsy--part I.

    PubMed

    Kleinschmidt-DeMasters, B K; Prayson, Richard A

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they narrow their diagnostic choices down to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part I, we assist the resident in learning how to handle brain biopsies in general and how to distinguish nonneoplastic lesions that mimic tumors from true neoplasms.

  19. An algorithmic approach to the brain biopsy--part II.

    PubMed

    Prayson, Richard A; Kleinschmidt-DeMasters, B K

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they narrow their diagnostic choices down to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part II, we assist the resident in arriving at the correct diagnosis for neuropathologic lesions containing granulomatous inflammation, macrophages, or abnormal blood vessels.

  20. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
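
    In the same spirit, a heavily simplified discrete-event sketch (stage names and timings are hypothetical, not TADSim's): compute-bound stages are not executed but replaced by the passage of sampled wall-clock time on an event queue.

        import heapq, random

        random.seed(0)
        STAGE_TIME = {"md_block": 1.2e-2, "force_call": 3.0e-3, "nudge": 5.0e-4}

        def simulate(n_events=1000):
            """Advance a virtual clock through abstracted compute stages."""
            clock, queue = 0.0, [(0.0, "md_block")]
            for _ in range(n_events):
                clock, stage = heapq.heappop(queue)
                nxt = random.choice(list(STAGE_TIME))            # next stage
                dt = STAGE_TIME[nxt] * random.uniform(0.8, 1.2)  # jittered cost
                heapq.heappush(queue, (clock + dt, nxt))
            return clock

        print(simulate())  # predicted runtime without running the real code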

  1. Weather Radar Studies

    DTIC Science & Technology

    1988-03-31

    In addition to radar operation and data-collection activities, a large data-analysis effort has been under way in support of automatic wind-shear detection algorithm development. The report's remaining sections cover data reduction and algorithm development (general-purpose software, Concurrent computer systems, Sun workstations, and radar data analysis, including algorithm verification, other studies, translations, and outside distributions) and Mesonet/LLWAS data analysis, beginning with the 1985 data.

  2. Algorithm Diversity for Resilient Systems

    DTIC Science & Technology

    2016-06-27

    Subject terms: computer security, software diversity, program transformation. The report presents a systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity guarantees, worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different complexities.

  3. Learning receptor positions from imperfectly known motions

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1990-01-01

    An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.

  4. Communication: translational Brownian motion for particles of arbitrary shape.

    PubMed

    Cichocki, Bogdan; Ekiel-Jeżewska, Maria L; Wajnryb, Eligiusz

    2012-02-21

    A single Brownian particle of arbitrary shape is considered. The time-dependent translational mean square displacement W(t) of a reference point on this particle is evaluated from the Smoluchowski equation. It is shown that at times larger than the characteristic time scale of the rotational Brownian relaxation, the slope of W(t) becomes independent of the choice of a reference point. Moreover, it is proved that in the long-time limit, the slope of W(t) is determined uniquely by the trace of the translational-translational mobility matrix μ(tt) evaluated with respect to the hydrodynamic center of mobility. The result is applicable to dynamic light scattering measurements, which indeed are performed in the long-time limit. © 2012 American Institute of Physics.
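
    In symbols, and hedging on conventions (three-dimensional diffusion, with k_B T the thermal energy), the stated long-time behavior is

        W(t) \simeq 2 k_B T \,\mathrm{tr}\!\left[\boldsymbol{\mu}^{tt}\right] t,
        \qquad t \gg \tau_{\mathrm{rot}},

    with the mobility matrix evaluated at the hydrodynamic center of mobility, so the slope is independent of the chosen reference point.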

  5. Knowledge translation of research findings

    PubMed Central

    2012-01-01

    Background One of the most consistent findings from clinical and health services research is the failure to translate research into practice and policy. As a result of these evidence-practice and policy gaps, patients fail to benefit optimally from advances in healthcare and are exposed to unnecessary risks of iatrogenic harms, and healthcare systems are exposed to unnecessary expenditure resulting in significant opportunity costs. Over the last decade, there has been increasing international policy and research attention on how to reduce the evidence-practice and policy gap. In this paper, we summarise the current concepts and evidence to guide knowledge translation activities, defined as T2 research (the translation of new clinical knowledge into improved health). We structure the article around five key questions: what should be transferred; to whom should research knowledge be transferred; by whom should research knowledge be transferred; how should research knowledge be transferred; and, with what effect should research knowledge be transferred? Discussion We suggest that the basic unit of knowledge translation should usually be up-to-date systematic reviews or other syntheses of research findings. Knowledge translators need to identify the key messages for different target audiences and to fashion these in language and knowledge translation products that are easily assimilated by different audiences. The relative importance of knowledge translation to different target audiences will vary by the type of research and appropriate endpoints of knowledge translation may vary across different stakeholder groups. There are a large number of planned knowledge translation models, derived from different disciplinary, contextual (i.e., setting), and target audience viewpoints. Most of these suggest that planned knowledge translation for healthcare professionals and consumers is more likely to be successful if the choice of knowledge translation strategy is informed by an assessment of the likely barriers and facilitators. Although our evidence on the likely effectiveness of different strategies to overcome specific barriers remains incomplete, there is a range of informative systematic reviews of interventions aimed at healthcare professionals and consumers (i.e., patients, family members, and informal carers) and of factors important to research use by policy makers. Summary There is a substantial (if incomplete) evidence base to guide choice of knowledge translation activities targeting healthcare professionals and consumers. The evidence base on the effects of different knowledge translation approaches targeting healthcare policy makers and senior managers is much weaker but there are a profusion of innovative approaches that warrant further evaluation. PMID:22651257

  6. From MIMO-OFDM Algorithms to a Real-Time Wireless Prototype: A Systematic Matlab-to-Hardware Design Flow

    NASA Astrophysics Data System (ADS)

    Weijers, Jan-Willem; Derudder, Veerle; Janssens, Sven; Petré, Frederik; Bourdoux, André

    2006-12-01

    To assess the performance of forthcoming 4th-generation wireless local area networks, the algorithmic functionality is usually modelled using a high-level mathematical software package, for instance, Matlab. In order to validate the modelling assumptions against the real physical world, the high-level functional model needs to be translated into a prototype. A systematic design methodology proves very valuable, since it avoids, or at least reduces, numerous design iterations. In this paper, we propose a novel Matlab-to-hardware design flow, which allows the algorithmic functionality to be mapped onto the target prototyping platform in a systematic and reproducible way. The proposed design flow is partly manual and partly tool assisted. It is shown that the proposed flow allows the same testbench to be used throughout the whole design process and avoids time-consuming and error-prone intermediate translation steps.

  7. PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.

    PubMed

    Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge

    2005-08-01

    Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most of the existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm that is capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
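
    A hedged parameterization (our notation, not necessarily the authors'): the source trajectory generalizes the constant-pitch helix as

        \mathbf{y}(\lambda) = \bigl(R\cos\lambda,\; R\sin\lambda,\; z(\lambda)\bigr),
        \qquad z(\lambda) = z_0 + \frac{1}{2\pi}\int_{\lambda_0}^{\lambda} h(\lambda')\,d\lambda',

    where the pitch h(λ) may vary with the source angle; h(λ) = const recovers the standard helical scan assumed by existing BPF/FBP derivations.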

  8. Aspects of numerical and representational methods related to the finite-difference simulation of advective and dispersive transport of freshwater in a thin brackish aquifer

    USGS Publications Warehouse

    Merritt, M.L.

    1993-01-01

    The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms. © 1993.
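
    The sensitivity to the differencing scheme can be reproduced with a one-dimensional toy (grid, CFL number, and front position are illustrative): backward (upwind) differencing smears the injected-water front through numerical dispersion, while explicit centered differencing develops oscillations around it.

        import numpy as np

        def advect(c, cfl, steps, scheme):
            """Explicit advection of a concentration front on a periodic grid."""
            c = c.copy()
            for _ in range(steps):
                cm, cp = np.roll(c, 1), np.roll(c, -1)   # c[i-1], c[i+1]
                if scheme == "backward":                 # upwind: diffusive
                    c = c - cfl * (c - cm)
                else:                                    # centered: oscillatory
                    c = c - 0.5 * cfl * (cp - cm)
            return c

        c0 = np.where(np.arange(200) < 40, 1.0, 0.0)     # freshwater front
        upwind = advect(c0, 0.5, 100, "backward")
        centered = advect(c0, 0.5, 100, "centered")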

  9. Comparison of probability statistics for automated ship detection in SAR imagery

    NASA Astrophysics Data System (ADS)

    Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.

    1998-12-01

    This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a constant false alarm rate (CFAR) filter applied to synthetic aperture radar data. The choice of probability distribution and the methodologies for calculating scene-specific statistics are discussed in some detail, and an empirical basis for the choice of probability distribution is given. We compare the results using a 1-look k-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics, the application of a χ²-distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998, as well as to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.
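
    For orientation, a minimal cell-averaging CFAR sketch (the OMW's actual filter and its k-distribution statistics are more elaborate; the Gaussian threshold here is a simplification):

        import numpy as np

        def cfar_detect(img, guard=2, train=8, k=5.0):
            """Flag pixels exceeding local clutter mean + k standard deviations."""
            r = guard + train
            det = np.zeros(img.shape, dtype=bool)
            for i in range(r, img.shape[0] - r):
                for j in range(r, img.shape[1] - r):
                    win = img[i - r:i + r + 1, j - r:j + r + 1].astype(float)
                    # exclude guard cells and the cell under test from the stats
                    win[train:train + 2 * guard + 1,
                        train:train + 2 * guard + 1] = np.nan
                    det[i, j] = img[i, j] > np.nanmean(win) + k * np.nanstd(win)
            return det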

  10. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    PubMed

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Stochastic Matching and the Voluntary Nature of Choice

    PubMed Central

    Neuringer, Allen; Jensen, Greg; Piff, Paul

    2007-01-01

    Attempts to characterize voluntary behavior have been ongoing for thousands of years. We provide experimental evidence that judgments of volition are based upon distributions of responses in relation to obtained rewards. Participants watched as responses, said to be made by “actors,” appeared on a computer screen. The participant's task was to estimate how well each actor represented the voluntary choices emitted by a real person. In actuality, all actors' responses were generated by algorithms based on Baum's (1979) generalized matching function. We systematically varied the exponent values (sensitivity parameter) of these algorithms: some actors matched response proportions to received reinforcer proportions, others overmatched (predominantly chose the highest-valued alternative), and yet others undermatched (chose relatively equally among the alternatives). In each of five experiments, we found that the matching actor's responses were judged most closely to approximate voluntary choice. We found also that judgments of high volition depended upon stochastic (or probabilistic) generation. Thus, stochastic responses that match reinforcer proportions best represent voluntary human choice. PMID:17725049
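
    A minimal sketch of such an actor under Baum's generalized matching function (bias omitted): choice probabilities follow reinforcer proportions raised to the sensitivity exponent s, so s = 1 matches, s > 1 overmatches, and s < 1 undermatches.

        import numpy as np

        rng = np.random.default_rng(0)

        def actor(reinforcer_rates, s, n_trials=1000):
            """Stochastic choices with response ratios = reinforcer ratios ** s."""
            r = np.asarray(reinforcer_rates, dtype=float) ** s
            p = r / r.sum()
            return rng.choice(len(p), size=n_trials, p=p)

        matching = actor([3.0, 1.0], s=1.0)    # ~3:1 response ratio
        over = actor([3.0, 1.0], s=3.0)        # mostly the richer option
        under = actor([3.0, 1.0], s=0.2)       # near-indifferent responding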

  12. Neural correlates of strategic reasoning during competitive games.

    PubMed

    Seo, Hyojung; Cai, Xinying; Donahue, Christopher H; Lee, Daeyeol

    2014-10-17

    Although human and animal behaviors are largely shaped by reinforcement and punishment, choices in social settings are also influenced by information about the knowledge and experience of other decision-makers. During competitive games, monkeys increased their payoffs by systematically deviating from a simple heuristic learning algorithm and thereby countering the predictable exploitation by their computer opponent. Neurons in the dorsomedial prefrontal cortex (dmPFC) signaled the animal's recent choice and reward history that reflected the computer's exploitative strategy. The strength of switching signals in the dmPFC also correlated with the animal's tendency to deviate from the heuristic learning algorithm. Therefore, the dmPFC might provide control signals for overriding simple heuristic learning algorithms based on the inferred strategies of the opponent. Copyright © 2014, American Association for the Advancement of Science.

  13. Two Legendre-Dual-Petrov-Galerkin Algorithms for Solving the Integrated Forms of High Odd-Order Boundary Value Problems

    PubMed Central

    Abd-Elhameed, Waleed M.; Doha, Eid H.; Bassuony, Mahmoud A.

    2014-01-01

    Two numerical algorithms based on dual-Petrov-Galerkin method are developed for solving the integrated forms of high odd-order boundary value problems (BVPs) governed by homogeneous and nonhomogeneous boundary conditions. Two different choices of trial functions and test functions which satisfy the underlying boundary conditions of the differential equations and the dual boundary conditions are used for this purpose. These choices lead to linear systems with specially structured matrices that can be efficiently inverted, hence greatly reducing the cost. The various matrix systems resulting from these discretizations are carefully investigated, especially their complexities and their condition numbers. Numerical results are given to illustrate the efficiency of the proposed algorithms, and some comparisons with some other methods are made. PMID:24616620

  14. Landau singularities from the amplituhedron

    DOE PAGES

    Dennen, T.; Prlina, I.; Spradlin, M.; ...

    2017-06-28

    We propose a simple geometric algorithm for determining the complete set of branch points of amplitudes in planar N = 4 super-Yang-Mills theory directly from the amplituhedron, without resorting to any particular representation in terms of local Feynman integrals. This represents a step towards translating integrands directly into integrals. In particular, the algorithm provides information about the symbol alphabets of general amplitudes. We illustrate the algorithm applied to the one- and two-loop MHV amplitudes.

  15. I can't wait: Methods for measuring and moderating individual differences in impulsive choice.

    PubMed

    Peterson, Jennifer R; Hill, Catherine C; Marshall, Andrew T; Stuebing, Sarah L; Kirkpatrick, Kimberly

    2015-01-01

    Impulsive choice behavior occurs when individuals make choices without regard for future consequences. This behavior is often maladaptive and is a common symptom in many disorders, including drug abuse, compulsive gambling, and obesity. Several proposed mechanisms may influence impulsive choice behavior. These mechanisms provide a variety of pathways that may provide the basis for individual differences that are often evident when measuring choice behavior. This review provides an overview of these different pathways to impulsive choice, and the behavioral intervention strategies being developed to moderate impulsive choice. Because of the compelling link between impulsive choice behavior and the near-epidemic pervasiveness of obesity in the United States, we focus on the relationship between impulsive choice behavior and obesity as a test case for application of the multiple pathways approach. Choosing immediate gratification over healthier long-term food choices is a contributing factor to the obesity crisis. Behavioral interventions can lead to more self-controlled choices in a rat pre-clinical model, suggesting a possible gateway for translation to human populations. Designing and implementing effective impulsive choice interventions is crucial to improving the overall health and well-being of impulsive individuals.

  16. I can't wait: Methods for measuring and moderating individual differences in impulsive choice

    PubMed Central

    Peterson, Jennifer R.; Hill, Catherine C.; Marshall, Andrew T.; Stuebing, Sarah L.; Kirkpatrick, Kimberly

    2016-01-01

    Impulsive choice behavior occurs when individuals make choices without regard for future consequences. This behavior is often maladaptive and is a common symptom in many disorders, including drug abuse, compulsive gambling, and obesity. Several proposed mechanisms may influence impulsive choice behavior. These mechanisms provide a variety of pathways that may provide the basis for individual differences that are often evident when measuring choice behavior. This review provides an overview of these different pathways to impulsive choice, and the behavioral intervention strategies being developed to moderate impulsive choice. Because of the compelling link between impulsive choice behavior and the near-epidemic pervasiveness of obesity in the United States, we focus on the relationship between impulsive choice behavior and obesity as a test case for application of the multiple pathways approach. Choosing immediate gratification over healthier long-term food choices is a contributing factor to the obesity crisis. Behavioral interventions can lead to more self-controlled choices in a rat pre-clinical model, suggesting a possible gateway for translation to human populations. Designing and implementing effective impulsive choice interventions is crucial to improving the overall health and well-being of impulsive individuals. PMID:27695664

  17. Satellite image processing for precision agriculture and agroindustry using convolutional neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Firdaus; Arkeman, Y.; Buono, A.; Hermadi, I.

    2017-01-01

    Translating satellite imagery into useful data for decision making is currently done manually by humans. In this research, we translate satellite imagery into useful data for decision making, especially for precision agriculture and agroindustry, using artificial intelligence methods, specifically a convolutional neural network and a genetic algorithm. We focus on producing a sustainable land-use plan with three objectives: the first is maximizing economic return; the second is minimizing CO2 emissions; the third is minimizing land degradation. Results show that the artificial intelligence method can produce good Pareto-optimal solutions in a short time.

  18. Transcultural Endocrinology: Adapting Type-2 Diabetes Guidelines on a Global Scale.

    PubMed

    Nieto-Martínez, Ramfis; González-Rivas, Juan P; Florez, Hermes; Mechanick, Jeffrey I

    2016-12-01

    Type-2 diabetes (T2D) needs to be prevented and treated effectively to reduce its burden and consequences. White papers, such as evidence-based clinical practice guidelines (CPG) and their more portable versions, clinical practice algorithms and clinical checklists, may improve clinical decision-making and diabetes outcomes. However, CPG are underused and poorly validated. Protocols that translate and implement these CPG are needed. This review presents the global dimension of T2D, details the importance of white papers in the transculturalization process, compares relevant international CPG, analyzes cultural variables, and summarizes translation strategies that can improve care. Specific protocols and algorithmic tools are provided. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm

    ERIC Educational Resources Information Center

    Stewart, Wayne; Stewart, Sepideh

    2014-01-01

    For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
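
    For readers who want the algorithmic core made explicit, a minimal random-walk Metropolis sampler (one member of the MCMC family; the target and step size are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)

        def metropolis(log_target, x0, step, n):
            """Sample from exp(log_target) via accept/reject random-walk moves."""
            x, lp, out = x0, log_target(x0), []
            for _ in range(n):
                prop = x + rng.normal(0.0, step)
                lp_prop = log_target(prop)
                if np.log(rng.uniform()) < lp_prop - lp:  # min(1, ratio) rule
                    x, lp = prop, lp_prop
                out.append(x)
            return np.array(out)

        draws = metropolis(lambda x: -0.5 * x**2, 0.0, 1.0, 5000)  # N(0, 1)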

  20. Object-oriented controlled-vocabulary translator using TRANSOFT + HyperPAD.

    PubMed

    Moore, G W; Berman, J J

    1991-01-01

    Automated coding of surgical pathology reports is demonstrated. This public-domain translation software operates on surgical pathology files, extracting diagnoses and assigning codes in a controlled medical vocabulary, such as SNOMED. Context-sensitive translation algorithms are employed, and syntactically correct diagnostic items are produced that are matched with controlled vocabulary. English-language surgical pathology reports, accessioned over one year at the Baltimore Veterans Affairs Medical Center, were translated. With an interface to a larger hospital information system, all natural language pathology reports are automatically rendered as topography and morphology codes. This translator frees the pathologist from the time-intensive task of personally coding each report, and may be used to flag certain diagnostic categories that require specific quality assurance actions.
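
    The essential lookup step can be sketched in a few lines; the term table and codes below are placeholders, not real SNOMED identifiers, and the real translator adds context-sensitive parsing on top of this.

        # Hypothetical controlled-vocabulary table (placeholder codes).
        LEXICON = {
            "lung": ("topography", "T-XXXX"),
            "colon": ("topography", "T-YYYY"),
            "adenocarcinoma": ("morphology", "M-XXXX"),
            "granuloma": ("morphology", "M-YYYY"),
        }

        def code_diagnosis(line):
            """Map each recognized term in a diagnostic line to an axis and code."""
            hits = []
            for token in line.lower().replace(",", " ").replace(":", " ").split():
                if token in LEXICON:
                    hits.append((token, *LEXICON[token]))
            return hits

        print(code_diagnosis("Lung, right upper lobe: adenocarcinoma"))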

  1. Object-oriented controlled-vocabulary translator using TRANSOFT + HyperPAD.

    PubMed Central

    Moore, G. W.; Berman, J. J.

    1991-01-01

    Automated coding of surgical pathology reports is demonstrated. This public-domain translation software operates on surgical pathology files, extracting diagnoses and assigning codes in a controlled medical vocabulary, such as SNOMED. Context-sensitive translation algorithms are employed, and syntactically correct diagnostic items are produced that are matched with controlled vocabulary. English-language surgical pathology reports, accessioned over one year at the Baltimore Veterans Affairs Medical Center, were translated. With an interface to a larger hospital information system, all natural language pathology reports are automatically rendered as topography and morphology codes. This translator frees the pathologist from the time-intensive task of personally coding each report, and may be used to flag certain diagnostic categories that require specific quality assurance actions. PMID:1807773

  2. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE PAGES

    Graf, Peter A.; Billups, Stephen

    2017-07-24

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.

  3. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter A.; Billups, Stephen

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.

  4. Knowledge in motion: The cultural politics of modern science translations in Arabic.

    PubMed

    Elshakry, Marwa S

    2008-12-01

    This essay looks at the problem of the global circulation of modern scientific knowledge by looking at science translations in modern Arabic. In the commercial centers of the late Ottoman Empire, emerging transnational networks lay behind the development of new communities of knowledge, many of which sought to break with old linguistic and literary norms to redefine the basis of their authority. Far from acting as neutral purveyors of "universal truths," scientific translations thus served as key instruments in this ongoing process of sociopolitical and epistemological transformation and mediation. Fierce debates over translators' linguistic strategies and choices involved deliberations over the character of language and the nature of "science" itself. They were also crucially shaped by such geopolitical factors as the rise of European imperialism and anticolonial nationalism in the region. The essay concludes by arguing for the need for greater attention to the local factors involved in the translation of scientific concepts across borders.

  5. Modeling Confidence Judgments, Response Times, and Multiple Choices in Decision Making: Recognition Memory and Motion Discrimination

    PubMed Central

    Ratcliff, Roger; Starns, Jeffrey J.

    2014-01-01

    Confidence in judgments is a fundamental aspect of decision making, and tasks that collect confidence judgments are an instantiation of multiple-choice decision making. We present a model for confidence judgments in recognition memory tasks that uses a multiple-choice diffusion decision process with separate accumulators of evidence for the different confidence choices. The accumulator that first reaches its decision boundary determines which choice is made. Five algorithms for accumulating evidence were compared, and one of them produced proportions of responses for each of the choices and full response time distributions for each choice that closely matched empirical data. With this algorithm, an increase in the evidence in one accumulator is accompanied by a decrease in the others so that the total amount of evidence in the system is constant. Application of the model to the data from an earlier experiment (Ratcliff, McKoon, & Tindall, 1994) uncovered a relationship between the shapes of z-transformed receiver operating characteristics and the behavior of response time distributions. Both are explained in the model by the behavior of the decision boundaries. For generality, we also applied the decision model to a 3-choice motion discrimination task and found it accounted for data better than a competing class of models. The confidence model presents a coherent account of confidence judgments and response time that cannot be explained with currently popular signal detection theory analyses or dual-process models of recognition. PMID:23915088
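
    One plausible reading of the winning evidence rule, as a hedged simulation (all parameters invented): accumulators race to a boundary while a renormalization step holds the total evidence constant, so gains in one accumulator drain the others.

        import numpy as np

        rng = np.random.default_rng(2)

        def confidence_race(drift, total=3.0, boundary=1.0, dt=1e-3, noise=0.3):
            """Return (winning choice, response time) for one trial."""
            k = len(drift)
            x = np.full(k, total / k)          # evidence per accumulator
            t = 0.0
            while x.max() < boundary:
                t += dt
                x += drift * dt + noise * np.sqrt(dt) * rng.normal(size=k)
                x += (total - x.sum()) / k     # conserve total evidence
            return int(np.argmax(x)), t

        # e.g. six confidence categories, with evidence favoring "sure old"
        choice, rt = confidence_race(np.array([0.1, 0.2, 0.3, 0.5, 0.9, 1.4]))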

  6. A mass graph-based approach for the identification of modified proteoforms using top-down tandem mass spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Qiang; Wu, Si; Tolić, Nikola

    Motivation: Although proteomics has rapidly developed in the past decade, researchers are still in the early stage of exploring the world of complex proteoforms, which are protein products with various primary structure alterations resulting from gene mutations, alternative splicing, post-translational modifications, and other biological processes. Proteoform identification is essential to mapping proteoforms to their biological functions as well as discovering novel proteoforms and new protein functions. Top-down mass spectrometry is the method of choice for identifying complex proteoforms because it provides a “bird’s eye view” of intact proteoforms. The combinatorial explosion of various alterations on a protein may result in billions of possible proteoforms, making proteoform identification a challenging computational problem. Results: We propose a new data structure, called the mass graph, for efficient representation of proteoforms and design mass graph alignment algorithms. We developed TopMG, a mass graph-based software tool for proteoform identification by top-down mass spectrometry. Experiments on top-down mass spectrometry data sets showed that TopMG outperformed existing methods in identifying complex proteoforms.

  7. Referential Choice: Predictability and Its Limits

    PubMed Central

    Kibrik, Andrej A.; Khudyakova, Mariya V.; Dobrov, Grigory B.; Linnik, Anastasia; Zalmanov, Dmitrij A.

    2016-01-01

    We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus, or about a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ rating of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical. PMID:27721800

  8. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugimoto, S; Inoue, T; Kurokawa, C

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available when respiratory motion is significant. The irradiation accuracy of the dynamic tumor tracking has been reported to be excellent. In addition to the irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because multiple phases of the motion have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using combinations of translation and rotation matrices. The coordinate system in which the radiation source is at the origin and the beam axis lies along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center corresponding to the gimbal swings, and the translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC). The convolution/superposition algorithm was used. The dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for the finite gimbal swing was implemented. The calculation time was moderate.
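
    The matrix chain can be written down directly; in the hedged sketch below (the distance and angles are invented), homogeneous 4 x 4 matrices translate the gimbal center to the origin, apply the two swing rotations, and translate back.

        import numpy as np

        def translation(v):
            T = np.eye(4); T[:3, 3] = v; return T

        def rot_x(a):   # one gimbal swing, about the x axis
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

        def rot_y(a):   # the other swing, about the y axis
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

        d = 1.0                                        # source-to-gimbal-center (made up)
        pan, tilt = np.radians(2.0), np.radians(1.0)   # swing angles (made up)
        # source at origin, beam along +z: shift center to origin, rotate, shift back
        M = translation([0, 0, d]) @ rot_x(pan) @ rot_y(tilt) @ translation([0, 0, -d])
        voxel = M @ np.array([0.0, 0.0, 50.0, 1.0])    # transform a phantom point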

  9. Correction of 3D rigid body motion in fMRI time series by independent estimation of rotational and translational effects in k-space.

    PubMed

    Costagli, Mauro; Waggoner, R Allen; Ueno, Kenichi; Tanaka, Keiji; Cheng, Kang

    2009-04-15

    In functional magnetic resonance imaging (fMRI), even subvoxel motion dramatically corrupts the blood oxygenation level-dependent (BOLD) signal, invalidating the assumption that intensity variation in time is primarily due to neuronal activity. Thus, correction of the subject's head movements is a fundamental step to be performed prior to data analysis. Most motion correction techniques register a series of volumes assuming that rigid body motion, characterized by rotational and translational parameters, occurs. Unlike the most widely used applications for fMRI data processing, which correct motion in the image domain by numerically estimating rotational and translational components simultaneously, the algorithm presented here operates in a three-dimensional k-space, to decouple and correct rotations and translations independently, offering new ways and more flexible procedures to estimate the parameters of interest. We developed an implementation of this method in MATLAB, and tested it on both simulated and experimental data. Its performance was quantified in terms of square differences and center of mass stability across time. Our data show that the algorithm proposed here successfully corrects for rigid-body motion, and its employment in future fMRI studies is feasible and promising.
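
    The decoupling rests on two standard Fourier identities (our notation): for an image s with spectrum S,

        \mathcal{F}\{s(\mathbf{x}-\mathbf{a})\}(\mathbf{k}) = e^{-i\,\mathbf{k}\cdot\mathbf{a}}\, S(\mathbf{k}),
        \qquad
        \mathcal{F}\{s(\mathsf{R}^{-1}\mathbf{x})\}(\mathbf{k}) = S(\mathsf{R}^{-1}\mathbf{k}),

    so translation changes only the phase of k-space while rotation rotates it; the magnitude |S(k)| is translation-invariant, which is what allows rotations to be estimated first and independently of translations.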

  10. Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging.

    PubMed

    Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul

    2011-12-19

    Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.

  11. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes

    PubMed Central

    Bertrand, Olivier J. N.; Lindemann, Jens P.; Egelhaaf, Martin

    2015-01-01

    Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth-structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and, thus, do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors in its input. Even then, the algorithm led successfully to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. The simulated agent then showed goal-directed behavior reminiscent of components of the navigation behavior of insects. PMID:26583771

  12. Evaluation of registration, compression and classification algorithms. Volume 1: Results

    NASA Technical Reports Server (NTRS)

    Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

    1979-01-01

    The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

  13. Classification of adaptive memetic algorithms: a comparative study.

    PubMed

    Ong, Yew-Soon; Lim, Meng-Hiot; Zhu, Ning; Wong, Kok-Wai

    2006-02-01

    Adaptation of parameters and operators represents one of the recent most important and promising areas of research in evolutionary computations; it is a form of designing self-configuring algorithms that acclimatize to suit the problem in hand. Here, our interests are on a recent breed of hybrid evolutionary algorithms typically known as adaptive memetic algorithms (MAs). One unique feature of adaptive MAs is the choice of local search methods or memes and recent studies have shown that this choice significantly affects the performances of problem searches. In this paper, we present a classification of memes adaptation in adaptive MAs on the basis of the mechanism used and the level of historical knowledge on the memes employed. Then the asymptotic convergence properties of the adaptive MAs considered are analyzed according to the classification. Subsequently, empirical studies on representatives of adaptive MAs for different type-level meme adaptations using continuous benchmark problems indicate that global-level adaptive MAs exhibit better search performances. Finally we conclude with some promising research directions in the area.

  14. STRATOP: A Model for Designing Effective Product and Communication Strategies. Paper No. 470.

    ERIC Educational Resources Information Center

    Pessemier, Edgar A.

    The STRATOP algorithm was developed to help planners and proponents find and test effectively designed choice objects and communication strategies. Choice objects can range from complex social, scientific, military, or educational alternatives to simple economic alternatives between assortments of branded convenience goods. Two classes of measured…

  15. Stochastic Models of Polymer Systems

    DTIC Science & Technology

    2016-01-01

    The stochastic gradient descent algorithm is now the "algorithm of choice" for very large machine learning problems... information about the behavior of the algorithm. At the same time, we were also able to formulate various acceleration techniques in precise mathematical terms...
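
    For concreteness, a minimal stochastic gradient descent loop on a least-squares problem (the data and learning rate are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(10000, 5))
        w_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
        y = X @ w_true + 0.1 * rng.normal(size=10000)

        w, lr = np.zeros(5), 0.01
        for epoch in range(5):
            for i in rng.permutation(len(y)):    # one sample per update
                grad = (X[i] @ w - y[i]) * X[i]  # single-sample loss gradient
                w -= lr * grad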

  16. Clustering algorithm for determining community structure in large networks

    NASA Astrophysics Data System (ADS)

    Pujol, Josep M.; Béjar, Javier; Delgado, Jordi

    2006-07-01

    We propose an algorithm to find the community structure in complex networks based on the combination of spectral analysis and modularity optimization. The clustering produced by our algorithm is as accurate as the best algorithms in the literature on modularity optimization; however, the main asset of the algorithm is its efficiency. The best match for our algorithm is Newman's fast algorithm, which is the reference algorithm for clustering in large networks due to its efficiency. When both algorithms are compared, ours outperforms the fast algorithm in both efficiency and accuracy of the clustering, in terms of modularity. Thus, the results suggest that the proposed algorithm is a good choice for analyzing the community structure of medium and large networks in the range of tens to hundreds of thousands of vertices.
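
    The quantity being optimized is Newman-Girvan modularity (standard notation, not reproduced from the paper):

        Q = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j),

    where A is the adjacency matrix, k_i the degree of vertex i, m the number of edges, and δ selects pairs of vertices assigned to the same community.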

  17. Does age matter? The impact of rodent age on study outcomes.

    PubMed

    Jackson, Samuel J; Andrews, Nick; Ball, Doug; Bellantuono, Ilaria; Gray, James; Hachoumi, Lamia; Holmes, Alan; Latcham, Judy; Petrie, Anja; Potter, Paul; Rice, Andrew; Ritchie, Alison; Stewart, Michelle; Strepka, Carol; Yeoman, Mark; Chapman, Kathryn

    2017-04-01

    Rodent models produce data which underpin biomedical research and non-clinical drug trials, but translation from rodents into successful clinical outcomes is often lacking. There is a growing body of evidence showing that improving experimental design is key to improving the predictive nature of rodent studies and reducing the number of animals used in research. Age, one important factor in experimental design, is often poorly reported and can be overlooked. The authors conducted a survey to assess the age used for a range of models, and the reasoning for age choice. From 297 respondents providing 611 responses, researchers reported using rodents most often in the 6-20 week age range regardless of the biology being studied. The age referred to as 'adult' by respondents varied between six and 20 weeks. Practical reasons for the choice of rodent age were frequently given, with increased cost associated with using older animals and maintenance of historical data comparability being two important limiting factors. These results highlight that choice of age is inconsistent across the research community and often not based on the development or cellular ageing of the system being studied. This could potentially result in decreased scientific validity and increased experimental variability. In some cases the use of older animals may be beneficial. Increased scientific rigour in the choice of the age of rodent may increase the translation of rodent models to humans.

  18. Does age matter? The impact of rodent age on study outcomes

    PubMed Central

    Andrews, Nick; Ball, Doug; Bellantuono, Ilaria; Gray, James; Hachoumi, Lamia; Holmes, Alan; Latcham, Judy; Petrie, Anja; Potter, Paul; Rice, Andrew; Ritchie, Alison; Stewart, Michelle; Strepka, Carol; Yeoman, Mark; Chapman, Kathryn

    2016-01-01

    Rodent models produce data which underpin biomedical research and non-clinical drug trials, but translation from rodents into successful clinical outcomes is often lacking. There is a growing body of evidence showing that improving experimental design is key to improving the predictive nature of rodent studies and reducing the number of animals used in research. Age, one important factor in experimental design, is often poorly reported and can be overlooked. The authors conducted a survey to assess the age used for a range of models, and the reasoning for age choice. From 297 respondents providing 611 responses, researchers reported using rodents most often in the 6–20 week age range regardless of the biology being studied. The age referred to as ‘adult’ by respondents varied between six and 20 weeks. Practical reasons for the choice of rodent age were frequently given, with increased cost associated with using older animals and maintenance of historical data comparability being two important limiting factors. These results highlight that choice of age is inconsistent across the research community and often not based on the development or cellular ageing of the system being studied. This could potentially result in decreased scientific validity and increased experimental variability. In some cases the use of older animals may be beneficial. Increased scientific rigour in the choice of the age of rodent may increase the translation of rodent models to humans. PMID:27307423

  19. Evolving from academic to academic entrepreneur: overcoming barriers to scientific progress and finance.

    PubMed

    Miller, Andrew D

    2016-07-01

    The overall goal of my career as an academic chemist has always been the design and creation of advanced therapeutics and diagnostics that address unmet medical need in the management of chronic diseases. Realising this goal has been an immensely difficult process involving multidisciplinary problem-driven research at the chemistry-biology-medicine interfaces. With success in the laboratory, I started seriously to question the value of remaining an academic whose career is spent in the pursuit of knowledge and understanding alone without making any significant effort to translate the knowledge and understanding gained into products of genuine utility for public benefit. Therefore, I elected by choice to become an academic entrepreneur, seeking opportunities wherever possible for the translation of the best of my personal and collaborative academic research work into potentially valuable and useful products. This choice has brought with it many unexpected difficulties and challenges. Nevertheless, progress has been made and sufficient has been learnt to suggest that this would be an appropriate moment to take stock and provide some personal reflections on what it takes to design and create advanced therapeutics and diagnostics in the laboratory, then seek to develop, innovate and translate the best towards market.

  20. State-based verification of RTCP-nets with nuXmv

    NASA Astrophysics Data System (ADS)

    Biernacka, Agnieszka; Biernacki, Jerzy; Szpyrka, Marcin

    2015-12-01

    The paper presents an algorithm for translating the coverability graphs of RTCP-nets (real-time coloured Petri nets) into nuXmv state machines. The approach enables users to verify RTCP-nets with the model checking techniques provided by the nuXmv tool. Full details of the algorithm are presented, and an illustrative example of the approach's usefulness is provided.
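
    The core of such a translation is mechanical once the coverability graph is available: each graph node becomes a value of a nuXmv state variable, and each edge a branch of the next() case expression. The Python sketch below illustrates only that skeleton; the graph and all names are invented, and a real RTCP-net translation would also have to encode time and colour information.

      # Minimal sketch: emit a nuXmv module from a labelled transition graph.
      def graph_to_nuxmv(states, init, edges):
          lines = ["MODULE main",
                   "VAR state : {%s};" % ", ".join(states),
                   "ASSIGN",
                   "  init(state) := %s;" % init,
                   "  next(state) := case"]
          for src, dst in edges:
              lines.append("    state = %s : %s;" % (src, dst))
          lines.append("    TRUE : state;  -- no outgoing edge: stay put")
          lines.append("  esac;")
          return "\n".join(lines)

      print(graph_to_nuxmv(["s0", "s1", "s2"], "s0",
                           [("s0", "s1"), ("s1", "s2"), ("s2", "s0")]))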

  1. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.
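
    For readers unfamiliar with the model observer used above: a channelized Hotelling observer projects each image onto a small set of channels and computes the Hotelling template in channel space, with detectability summarized as SNR^2 = dv' S^-1 dv. Below is a minimal numpy sketch with Laguerre-Gauss channels; the channel count, width, and test images are illustrative, not the paper's settings.

      import numpy as np

      def lg_channels(n_channels, size, a):
          """Rotationally symmetric Laguerre-Gauss channels on a size x size grid."""
          y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
          r2 = x**2 + y**2
          u = 2.0 * np.pi * r2 / a**2
          L = [np.ones_like(u), 1.0 - u]          # Laguerre polynomials by recurrence
          for n in range(1, n_channels):
              L.append(((2 * n + 1 - u) * L[n] - n * L[n - 1]) / (n + 1))
          chans = [np.exp(-np.pi * r2 / a**2) * L[n] for n in range(n_channels)]
          return np.stack([c.ravel() / np.linalg.norm(c) for c in chans], axis=1)

      def cho_snr(signal_imgs, noise_imgs, U):
          v1 = signal_imgs.reshape(len(signal_imgs), -1) @ U   # channel responses
          v0 = noise_imgs.reshape(len(noise_imgs), -1) @ U
          dv = v1.mean(0) - v0.mean(0)
          S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))              # pooled channel covariance
          return float(dv @ np.linalg.solve(S, dv))            # detectability SNR^2

      rng = np.random.default_rng(0)
      U = lg_channels(6, 64, a=14.0)
      blob = 0.3 * np.exp(-((np.mgrid[:64, :64] - 31.5) ** 2).sum(0) / 50.0)
      signal = rng.normal(0, 1, (200, 64, 64)) + blob
      print(cho_snr(signal, rng.normal(0, 1, (200, 64, 64)), U))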

  2. Translational/Personalized Medicine, Pharmaco/Surgico/Radiogenomics, Lymphatic Spread of Cancer, and Medical Ignoromes

    PubMed Central

    WITTE, MARLYS H.

    2014-01-01

    In the elusive quest for “personalized” cancer treatments based on pharmacogenomics, diverse challenges must be overcome: questionable validity of “molecular models of life,” obstacles to bidirectional translation of scientific advances from bench to bedside to community, and limitations of bioinformatics to recognize and deal with “ignoramics/ignoromes” (expanding unknowns in cancer biology, theranostics, and therapeutic choices). These considerations apply to lymphatic system functioning—lymphatic vessels, lymph, lymph nodes, and lymphocytes—in diseases like cancer. PMID:21480242

  3. Translating statistical images to text summaries for partially sighted persons on mobile devices: iconic image maps approach

    NASA Astrophysics Data System (ADS)

    Williams, Godfried B.

    2005-03-01

    This paper demonstrates a novel idea for transforming statistical image data to text using an autoassociative, unsupervised artificial neural network together with iconic image maps built by a shape-and-texture genetic algorithm; these are the underlying concepts for translating the image data to text. Full details of the experiments can be accessed at http://www.uel.ac.uk/seis/applications/.

  4. Expert networks in CLIPS

    NASA Technical Reports Server (NTRS)

    Hruska, S. I.; Dalke, A.; Ferguson, J. J.; Lacher, R. C.

    1991-01-01

    Rule-based expert systems may be structurally and functionally mapped onto a special class of neural networks called expert networks. This mapping lends itself to adaptation of connectionist learning strategies for the expert networks. A parsing algorithm to translate C Language Integrated Production System (CLIPS) rules into a network of interconnected assertion and operation nodes has been developed. The translation of CLIPS rules to an expert network and back again is illustrated. Measures of uncertainty similar to those used in MYCIN-like systems are introduced into the CLIPS system, and techniques are presented for combining and firing nodes in the network based on rule-firing with these certainty factors in the expert system. Several learning algorithms are under study which automate the process of attaching certainty factors to rules.
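
    The certainty-factor arithmetic referred to is the standard MYCIN combination rule, shown here as a sketch (the standard textbook formulas, not the authors' CLIPS implementation):

      def combine_cf(cf1, cf2):
          """MYCIN-style combination of two certainty factors in [-1, 1]."""
          if cf1 >= 0 and cf2 >= 0:
              return cf1 + cf2 * (1 - cf1)
          if cf1 < 0 and cf2 < 0:
              return cf1 + cf2 * (1 + cf1)
          return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

      # Two rules supporting the same assertion with CF 0.6 and 0.4:
      print(combine_cf(0.6, 0.4))  # 0.76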

  5. Should the parameters of a BCI translation algorithm be continually adapted?

    PubMed

    McFarland, Dennis J; Sarnacki, William A; Wolpaw, Jonathan R

    2011-07-15

    People with or without motor disabilities can learn to control sensorimotor rhythms (SMRs) recorded from the scalp to move a computer cursor in one or more dimensions or can use the P300 event-related potential as a control signal to make discrete selections. Data collected from individuals using an SMR-based or P300-based BCI were evaluated offline to estimate the impact on performance of continually adapting the parameters of the translation algorithm during BCI operation. The performance of the SMR-based BCI was enhanced by adaptive updating of the feature weights or adaptive normalization of the features. In contrast, P300 performance did not benefit from either of these procedures. Copyright © 2011 Elsevier B.V. All rights reserved.
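
    The two adaptations evaluated for the SMR-based BCI can be written as online updates: a running z-score of each feature (adaptive normalization) and an LMS-style update of the feature weights. The sketch below is a schematic reconstruction with invented learning rates, not the authors' exact estimator.

      import numpy as np

      class AdaptiveTranslator:
          def __init__(self, n_features, lr_stats=0.01, lr_w=0.001):
              self.mu = np.zeros(n_features)
              self.var = np.ones(n_features)
              self.w = np.zeros(n_features)
              self.lr_stats, self.lr_w = lr_stats, lr_w

          def step(self, features, target):
              # Adaptive normalization: running mean and variance per feature.
              self.mu += self.lr_stats * (features - self.mu)
              self.var += self.lr_stats * ((features - self.mu) ** 2 - self.var)
              z = (features - self.mu) / np.sqrt(self.var + 1e-12)
              out = self.w @ z                           # cursor-movement command
              self.w += self.lr_w * (target - out) * z   # adaptive weight update
              return out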

  6. The Dostoevsky Machine in Georgetown: scientific translation in the Cold War.

    PubMed

    Gordin, Michael D

    2016-04-01

    Machine Translation (MT) is now ubiquitous in discussions of translation. The roots of this phenomenon - first publicly unveiled in the so-called 'Georgetown-IBM Experiment' on 9 January 1954 - not only displayed the technological utopianism still associated with dreams of a universal computer translator, but were deeply enmeshed in the political pressures of the Cold War and a dominating conception of scientific writing as both the goal of machine translation and its method. Machine translation was created, in part, as a solution to a perceived crisis sparked by the massive expansion of Soviet science. Scientific prose was also perceived as linguistically simpler, and so served as the model for how to turn a language into a series of algorithms. This paper follows the rise of the Georgetown program - the largest single program in the world - from 1954 to the (as it turns out, temporary) collapse of MT in 1964.

  7. Exact Fan-Beam Reconstruction With Arbitrary Object Translations and Truncated Projections

    NASA Astrophysics Data System (ADS)

    Hoskovec, Jan; Clackdoyle, Rolf; Desbat, Laurent; Rit, Simon

    2016-06-01

    This article proposes a new method for reconstructing two-dimensional (2D) computed tomography (CT) images from truncated and motion contaminated sinograms. The type of motion considered here is a sequence of rigid translations which are assumed to be known. The algorithm first identifies the sufficiency of angular coverage in each 2D point of the CT image to calculate the Hilbert transform from the local “virtual” trajectory which accounts for the motion and the truncation. By taking advantage of data redundancy in the full circular scan, our method expands the reconstructible region beyond the one obtained with chord-based methods. The proposed direct reconstruction algorithm is based on the Differentiated Back-Projection with Hilbert filtering (DBP-H). The motion is taken into account during backprojection which is the first step of our direct reconstruction, before taking the derivatives and inverting the finite Hilbert transform. The algorithm has been tested in a proof-of-concept study on Shepp-Logan phantom simulations with several motion cases and detector sizes.

  8. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    PubMed

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
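
    A standard way to recover a rigid rotation matrix and translation vector from matched 3-D point sets (for example, tracked fiducials and their model coordinates) is the SVD-based Kabsch solution. The sketch shows that generic computation, not the paper's specific registration pipeline:

      import numpy as np

      def rigid_transform(P, Q):
          """Least-squares R, t with Q ~ R @ P + t for matched 3xN point sets."""
          p_bar, q_bar = P.mean(1, keepdims=True), Q.mean(1, keepdims=True)
          H = (P - p_bar) @ (Q - q_bar).T
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T          # the diag() guard rules out reflections
          t = q_bar - R @ p_bar
          return R, t

      # Quick check: recover a known rotation and translation.
      rng = np.random.default_rng(0)
      P = rng.random((3, 10))
      th = 0.3
      Rz = np.array([[np.cos(th), -np.sin(th), 0],
                     [np.sin(th),  np.cos(th), 0],
                     [0, 0, 1]])
      R, t = rigid_transform(P, Rz @ P + np.array([[1.0], [2.0], [3.0]]))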

  9. The Neuroscience of Consumer Choice

    PubMed Central

    Hsu, Ming; Yoon, Carolyn

    2015-01-01

    We review progress and challenges relating to scientific and applied goals of the nascent field of consumer neuroscience. Scientifically, substantial progress has been made in understanding the neurobiology of choice processes. Further advances, however, require researchers to begin clarifying the set of developmental and cognitive processes that shape and constrain choices. First, despite the centrality of preferences in theories of consumer choice, we still know little about where preferences come from and the underlying developmental processes. Second, the role of attention and memory processes in consumer choice remains poorly understood, despite importance ascribed to them in interpreting data from the field. The applied goal of consumer neuroscience concerns our ability to translate this understanding to augment prediction at the population level. Although the use of neuroscientific data for market-level predictions remains speculative, there is growing evidence of superiority in specific cases over existing market research techniques. PMID:26665152

  10. Probabilistic choice between symmetric disparities in motion stereo matching for a lateral navigation system

    NASA Astrophysics Data System (ADS)

    Ershov, Egor; Karnaukhov, Victor; Mozerov, Mikhail

    2016-02-01

    Two consecutive frames of a lateral navigation camera video sequence can be considered as an appropriate approximation to epipolar stereo. To overcome edge-aware inaccuracy caused by occlusion, we propose a model that matches the current frame to the next and to the previous ones. The positive disparity of matching to the previous frame has its symmetric negative disparity to the next frame. The proposed algorithm performs probabilistic choice for each matched pixel between the positive disparity and its symmetric disparity cost. A disparity map obtained by optimization over the cost volume composed of the proposed probabilistic choice is more accurate than the traditional left-to-right and right-to-left disparity maps cross-check. Also, our algorithm needs two times less computational operations per pixel than the cross-check technique. The effectiveness of our approach is demonstrated on synthetic data and real video sequences, with ground-truth value.
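
    One plausible reading of the per-pixel probabilistic choice is a soft-min blend between the cost of matching the current frame to the previous one at disparity +d and to the next one at the symmetric -d; the blended volume then enters the usual optimization. Illustrative numpy sketch with an invented temperature parameter:

      import numpy as np

      def blended_cost(cost_prev, cost_next, T=1.0):
          """cost_prev[d, y, x]: matching frame t to t-1 at disparity +d.
          cost_next[d, y, x]: matching frame t to t+1 at the symmetric -d.
          Returns a per-pixel probabilistic blend of the two costs."""
          p = np.exp(-cost_prev / T)
          p = p / (p + np.exp(-cost_next / T))   # probability of trusting t-1
          return p * cost_prev + (1 - p) * cost_next

      # Winner-takes-all disparity map from the blended volume:
      # disparity = blended_cost(cp, cn).argmin(axis=0)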

  11. Finite Element Approach for the Design of Control Algorithms for Vertical Fin Buffeting Using Strain Actuation

    DTIC Science & Technology

    2001-06-01

    Approved for public release; distribution unlimited. This paper is part of a compilation report. It presents a finite element approach for the design of control algorithms for vertical fin buffeting using strain actuation, considering the disturbance (buffet load) and the two output variables (a choice among four accelerometers and five strain-gauge positions).

  12. Deciphering mRNA Sequence Determinants of Protein Production Rate

    NASA Astrophysics Data System (ADS)

    Szavits-Nossan, Juraj; Ciandrini, Luca; Romano, M. Carmen

    2018-03-01

    One of the greatest challenges in biophysical models of translation is to identify coding sequence features that affect the rate of translation and therefore the overall protein production in the cell. We propose an analytic method to solve a translation model based on the inhomogeneous totally asymmetric simple exclusion process, which allows us to unveil simple design principles of nucleotide sequences determining protein production rates. Our solution shows an excellent agreement when compared to numerical genome-wide simulations of S. cerevisiae transcript sequences and predicts that the first 10 codons, which is the ribosome footprint length on the mRNA, together with the value of the initiation rate, are the main determinants of protein production rate under physiological conditions. Finally, we interpret the obtained analytic results based on the evolutionary role of the codons' choice for regulating translation rates and ribosome densities.
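
    As a toy version of this model class, the sketch below runs a continuous-time TASEP with site-dependent hopping rates and a one-site ribosome footprint (the paper treats the realistic footprint of about 10 codons); the production rate is the termination count per unit time. All rates below are invented.

      import numpy as np

      def tasep_production_rate(w, alpha, beta, t_max=2000.0, seed=0):
          """w[i]: hop rate from codon i to i+1; alpha/beta: initiation/termination."""
          rng = np.random.default_rng(seed)
          L = len(w)
          occ = np.zeros(L, dtype=bool)
          t, completed = 0.0, 0
          while t < t_max:
              rates = []                              # enumerate enabled events
              if not occ[0]:
                  rates.append(("init", alpha))
              for i in range(L - 1):
                  if occ[i] and not occ[i + 1]:
                      rates.append((i, w[i]))
              if occ[L - 1]:
                  rates.append(("term", beta))
              total = sum(r for _, r in rates)
              t += rng.exponential(1.0 / total)       # Gillespie time step
              pick = rng.uniform(0, total)
              for ev, r in rates:
                  pick -= r
                  if pick <= 0:
                      break
              if ev == "init":
                  occ[0] = True
              elif ev == "term":
                  occ[L - 1] = False
                  completed += 1
              else:
                  occ[ev], occ[ev + 1] = False, True
          return completed / t_max                    # burn-in omitted for brevity

      print(tasep_production_rate(np.full(30, 10.0), alpha=0.8, beta=2.0))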

  13. Autonomous mechanism of internal choice estimate underlies decision inertia.

    PubMed

    Akaishi, Rei; Umeda, Kazumasa; Nagase, Asako; Sakai, Katsuyuki

    2014-01-08

    Our choice is influenced by choices we made in the past, but the mechanism responsible for the choice bias remains elusive. Here we show that the history-dependent choice bias can be explained by an autonomous learning rule whereby an estimate of the likelihood of a choice to be made is updated in each trial by comparing between the actual and expected choices. We found that in perceptual decision making without performance feedback, a decision on an ambiguous stimulus is repeated on the subsequent trial more often than a decision on a salient stimulus. This inertia of decision was not accounted for by biases in motor response, sensory processing, or attention. The posterior cingulate cortex and frontal eye field represent choice prediction error and choice estimate in the learning algorithm, respectively. Interactions between the two regions during the intertrial interval are associated with decision inertia on a subsequent trial. Copyright © 2014 Elsevier Inc. All rights reserved.
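
    The learning rule described takes two lines: the internal estimate E of the likelihood of choosing option A is updated by a choice prediction error, and E biases the next decision variable. Schematic simulation with invented parameters:

      import numpy as np

      rng = np.random.default_rng(0)
      alpha, bias_gain, E = 0.3, 0.5, 0.5   # learning rate, bias weight, choice estimate

      for trial in range(200):
          evidence = rng.normal(0.0, 1.0)          # signed stimulus evidence for A vs B
          dv = evidence + bias_gain * (2 * E - 1)  # history bias from the choice estimate
          choice_a = dv > 0
          E += alpha * (float(choice_a) - E)       # choice prediction error update
      # Ambiguous stimuli (small |evidence|) are dominated by the bias term,
      # reproducing the stronger repetition after ambiguous trials.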

  14. Writing Multiple Choice Outcome Questions to Assess Knowledge and Competence.

    PubMed

    Brady, Erik D

    2015-11-01

    Few articles contemplate the need for good guidance in question item-writing in the continuing education (CE) space. Although many of the core principles of sound item design translate to the CE health education team, the need exists for specific examples for nurse educators that clearly describe how to measure changes in competence and knowledge using multiple choice items. In this article, some keys points and specific examples for nursing CE providers are shared. Copyright 2015, SLACK Incorporated.

  15. Annealed Importance Sampling Reversible Jump MCMC algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
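
    The annealed importance sampling ingredient that aisRJ grafts onto RJ-MCMC can be shown in isolation. The sketch below estimates a ratio of normalizing constants between two fixed densities by annealing through intermediate distributions; the transdimensional bookkeeping of aisRJ is deliberately omitted, and all settings are invented.

      import numpy as np

      rng = np.random.default_rng(0)

      # Anneal from p0 = N(0, 1) to p1 = N(3, 0.5^2); both are normalized,
      # so log(Z1/Z0) = 0, which gives a built-in check of the estimator.
      logp0 = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
      logp1 = lambda x: -0.5 * ((x - 3) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

      betas = np.linspace(0, 1, 200)

      def ais_one_run():
          x, logw = rng.normal(), 0.0
          for b0, b1 in zip(betas[:-1], betas[1:]):
              logw += (b1 - b0) * (logp1(x) - logp0(x))   # incremental AIS weight
              # One Metropolis step targeting the intermediate distribution p_b1:
              lp = lambda y: (1 - b1) * logp0(y) + b1 * logp1(y)
              prop = x + rng.normal(0, 0.5)
              if np.log(rng.random()) < lp(prop) - lp(x):
                  x = prop
          return logw

      logw = np.array([ais_one_run() for _ in range(200)])
      print(np.log(np.mean(np.exp(logw))))   # estimates log(Z1/Z0), here ~0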

  16. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
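
    The image interpolation algorithm estimates global shift in closed form by modelling the new frame as an interpolation between pre-shifted copies of a reference frame. A 1-D numpy sketch of that least-squares step follows (the 2-D version solves a 2 x 2 system per frame); the test signal is invented.

      import numpy as np

      def i2a_shift_1d(f_ref, f_new, delta=1):
          """Estimate sub-pixel translation of f_new relative to f_ref.
          Model: f_new(x) ~ f_ref(x) + (s/delta) * (f_ref(x-delta) - f_ref(x+delta)) / 2."""
          g = (np.roll(f_ref, delta) - np.roll(f_ref, -delta)) / 2.0
          return delta * np.dot(f_new - f_ref, g) / np.dot(g, g)

      x = np.linspace(0, 4 * np.pi, 400)
      f0 = np.sin(x) + 0.3 * np.sin(3 * x)
      f1 = np.interp(x - 0.4 * (x[1] - x[0]), x, f0)  # f0 shifted by 0.4 samples
      print(i2a_shift_1d(f0, f1))                      # ~0.4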

  17. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
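
    The effect quantified in the paper can be reproduced qualitatively in a few lines: simulate a wind series, run a start/stop threshold controller with hysteresis, and integrate power while the turbine is online. Every number below, including the toy power curve, is invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      wind = np.clip(8 + np.cumsum(rng.normal(0, 0.3, 10_000)) * 0.05, 0, 25)

      def energy(v_start, v_stop, wind, rated=100.0):
          online, total = False, 0.0
          for v in wind:
              if not online and v >= v_start:
                  online = True            # start threshold crossed
              elif online and v < v_stop:
                  online = False           # hysteresis: lower stop threshold
              if online:
                  total += min(rated, max(0.0, v - 4.0) ** 3 * 0.1)  # toy power curve
          return total

      # A conservative start threshold forfeits energy relative to a tighter one:
      print(energy(7.0, 4.5, wind), energy(5.0, 4.5, wind))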

  18. One-dimensional swarm algorithm packaging

    NASA Astrophysics Data System (ADS)

    Lebedev, Boris K.; Lebedev, Oleg B.; Lebedeva, Ekaterina O.

    2018-05-01

    The paper considers an algorithm for solving the one-dimensional packing problem based on an adaptive behavior model of an ant colony. The key role in the development of the ant algorithm is played by the choice of representation (interpretation) of the solution. The structure of the solution search graph, the procedure for finding solutions on the graph, and the methods of deposition and evaporation of pheromone are described. Unlike the canonical paradigm of an ant algorithm, an ant on the solution search graph generates sets of elements distributed across blocks. Experimental studies were conducted on an IBM PC. Compared with existing algorithms, the results are improved.
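
    A compact way to see the design choice the authors emphasize (how an ant's path encodes a packing) is a minimal ant-colony loop in which each ant samples an item order from a position-by-item pheromone matrix and packs first-fit. This is a generic sketch, not the paper's algorithm; all constants are invented.

      import numpy as np

      def first_fit(order, sizes, cap):
          bins = []
          for i in order:
              for b in bins:
                  if sum(sizes[j] for j in b) + sizes[i] <= cap:
                      b.append(i)
                      break
              else:
                  bins.append([i])
          return bins

      def aco_pack(sizes, cap, n_ants=20, iters=50, rho=0.1, seed=0):
          rng = np.random.default_rng(seed)
          n = len(sizes)
          tau = np.ones((n, n))      # tau[k, i]: desirability of item i at position k
          best, best_len = None, np.inf
          for _ in range(iters):
              for _ant in range(n_ants):
                  left, order = list(range(n)), []
                  for k in range(n):
                      p = np.array([tau[k, i] for i in left])
                      i = left.pop(rng.choice(len(left), p=p / p.sum()))
                      order.append(i)
                  bins = first_fit(order, sizes, cap)
                  if len(bins) < best_len:
                      best, best_len = order, len(bins)
              tau *= 1 - rho                           # evaporation
              for k, i in enumerate(best):
                  tau[k, i] += 1.0 / best_len          # deposit along the best order
          return best_len

      print(aco_pack([7, 5, 4, 3, 3, 2, 2, 2], cap=10))   # 3 bins is optimal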

  19. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
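
    The parameter guidance above translates directly into code. Below is a minimal binary GA with size-2 tournament selection and a per-bit mutation rate of 0.5/popsize, as the abstract recommends; the objective function and remaining constants are stand-ins.

      import numpy as np

      def ga(fitness, n_bits=32, pop_size=40, gens=200, cx_rate=0.7, seed=0):
          rng = np.random.default_rng(seed)
          mut_rate = 0.5 / pop_size   # "about half of the inverse of the population size"
          pop = rng.integers(0, 2, (pop_size, n_bits))
          for _ in range(gens):
              fit = np.array([fitness(ind) for ind in pop])
              new = []
              for _ in range(pop_size):
                  # Tournament selection, size 2: simple and self-scaling.
                  a, b = rng.integers(0, pop_size, 2)
                  p1 = pop[a] if fit[a] >= fit[b] else pop[b]
                  a, b = rng.integers(0, pop_size, 2)
                  p2 = pop[a] if fit[a] >= fit[b] else pop[b]
                  child = p1.copy()
                  if rng.random() < cx_rate:           # one-point crossover
                      cut = rng.integers(1, n_bits)
                      child[cut:] = p2[cut:]
                  flip = rng.random(n_bits) < mut_rate
                  child[flip] ^= 1
                  new.append(child)
              pop = np.array(new)
          return max(pop, key=fitness)

      # Toy objective: maximize the number of ones.
      print(ga(lambda ind: ind.sum()).sum())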

  20. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  1. From the physics of interacting polymers to optimizing routes on the London Underground

    PubMed Central

    Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael

    2013-01-01

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198

  2. From the physics of interacting polymers to optimizing routes on the London Underground.

    PubMed

    Yeung, Chi Ho; Saad, David; Wong, K Y Michael

    2013-08-20

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise.

  3. Discovery of numerous novel small genes in the intergenic regions of the Escherichia coli O157:H7 Sakai genome

    PubMed Central

    Hücker, Sarah M.; Ardern, Zachary; Goldberg, Tatyana; Schafferhans, Andrea; Bernhofer, Michael; Vestergaard, Gisle; Nelson, Chase W.; Schloter, Michael; Rost, Burkhard; Scherer, Siegfried

    2017-01-01

    In the past, short protein-coding genes were often disregarded by genome annotation pipelines. Transcriptome sequencing (RNAseq) signals outside of annotated genes have usually been interpreted to indicate either ncRNA or pervasive transcription. Therefore, in addition to the transcriptome, the translatome (RIBOseq) of the enteric pathogen Escherichia coli O157:H7 strain Sakai was determined at two optimal growth conditions and a severe stress condition combining low temperature and high osmotic pressure. All intergenic open reading frames potentially encoding a protein of ≥ 30 amino acids were investigated with regard to coverage by transcription and translation signals and their translatability expressed by the ribosomal coverage value. This led to discovery of 465 unique, putative novel genes not yet annotated in this E. coli strain, which are evenly distributed over both DNA strands of the genome. For 255 of the novel genes, annotated homologs in other bacteria were found, and a machine-learning algorithm, trained on small protein-coding E. coli genes, predicted that 89% of these translated open reading frames represent bona fide genes. The remaining 210 putative novel genes without annotated homologs were compared to the 255 novel genes with homologs and to 250 short annotated genes of this E. coli strain. All three groups turned out to be similar with respect to their translatability distribution, fractions of differentially regulated genes, secondary structure composition, and the distribution of evolutionary constraint, suggesting that both novel groups represent legitimate genes. However, the machine-learning algorithm only recognized a small fraction of the 210 genes without annotated homologs. It is possible that these genes represent a novel group of genes, which have unusual features dissimilar to the genes of the machine-learning algorithm training set. PMID:28902868
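
    The translatability measure used here, the ribosomal coverage value, is essentially the ratio of ribosome-profiling coverage to RNA-seq coverage over an ORF. Schematic computation (coordinate conventions invented):

      def ribosomal_coverage_value(ribo_depth, rna_depth, start, end):
          """Mean RIBOseq depth over the ORF divided by mean RNAseq depth.
          ribo_depth / rna_depth: per-base read-depth arrays for one strand."""
          ribo = sum(ribo_depth[start:end]) / (end - start)
          rna = sum(rna_depth[start:end]) / (end - start)
          return ribo / rna if rna > 0 else float("nan")

      print(ribosomal_coverage_value([5, 6, 7, 8], [10, 10, 10, 10], 0, 4))  # 0.65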

  4. Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 1: Formulation of the analysis

    NASA Astrophysics Data System (ADS)

    Medgyesimitschang, L. N.; Putnam, J. M.

    1982-05-01

    A general analytical formulation, based on the method of moments (MM) is described for solving electromagnetic problems associated with off-surface (wire) and aperture radiators on finite-length cylinders of arbitrary cross section, denoted in this report as bodies of translation (BOT). This class of bodies can be used to model structures with noncircular cross sections such as wings, fins and aircraft fuselages.

  5. Eigenvectors determination of the ribosome dynamics model during mRNA translation using the Kleene Star algorithm

    NASA Astrophysics Data System (ADS)

    Ernawati; Carnia, E.; Supriatna, A. K.

    2018-03-01

    Eigenvalues and eigenvectors in max-plus algebra have the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for understanding the dynamics of a system, for example in train system scheduling, production system scheduling, and the scheduling of learning activities in moving classes. In the translation of proteins, in which the ribosome moves uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and the density of ribosomes on the mRNA. Based on this, it is important to examine the eigenvalues and eigenvectors in the process of protein translation. In this paper an eigenvector formula is given for a ribosome dynamics model during mRNA translation by using the Kleene star algorithm, in which the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the model matrix B_λ^{⊗n}. Among the important properties, it always has the same elements in the first column for n = 1, 2, … if the eigenvalue is the initiation time, λ = τ_in, and that column is the eigenvector of the model corresponding to λ.
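
    For concreteness, a small numpy sketch of the underlying max-plus machinery: with A_λ = A - λ, the Kleene star is A_λ* = I ⊕ A_λ ⊕ A_λ² ⊕ …, and any column j of A_λ* whose diagonal entry of A_λ⁺ is zero (a node on a critical cycle) is an eigenvector of A for eigenvalue λ. The 2 x 2 example matrix is invented.

      import numpy as np

      NEG = -np.inf

      def mp_mul(A, B):
          """Max-plus matrix product: (A ⊗ B)[i, j] = max_k (A[i, k] + B[k, j])."""
          return np.max(A[:, :, None] + B[None, :, :], axis=1)

      def mp_eigenvector(A, lam):
          """Eigenvector of A for eigenvalue lam via the Kleene star of A_lam = A - lam."""
          n = len(A)
          Alam = A - lam
          P, plus = Alam.copy(), Alam.copy()
          for _ in range(n - 1):                  # A⁺ = A ⊕ A² ⊕ ... ⊕ Aⁿ
              P = mp_mul(P, Alam)
              plus = np.maximum(plus, P)
          eye = np.where(np.eye(n, dtype=bool), 0.0, NEG)   # max-plus identity
          star = np.maximum(plus, eye)
          j = int(np.argmax(np.isclose(np.diag(plus), 0.0)))  # node on a critical cycle
          return star[:, j]

      A = np.array([[NEG, 2.0], [3.0, NEG]])      # eigenvalue = mean cycle weight = 2.5
      v = mp_eigenvector(A, 2.5)
      print(v, mp_mul(A, v[:, None]).ravel() - 2.5)   # A ⊗ v equals 2.5 ⊗ v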

  6. Impact of respiratory-correlated CT sorting algorithms on the choice of margin definition for free-breathing lung radiotherapy treatments.

    PubMed

    Thengumpallil, Sheeba; Germond, Jean-François; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-06-01

    To investigate the impact of the Toshiba phase- and amplitude-sorting algorithms on margin strategies for free-breathing lung radiotherapy treatments in the presence of breathing variations, a 4D CT of a sphere inside a dynamic thorax phantom was acquired. The 4D CT was reconstructed according to the phase- and amplitude-sorting algorithms. The phantom was moved by reproducing amplitude, frequency, and mixed amplitude and frequency variations. Artefact analysis was performed for the Mid-Ventilation and ITV-based strategies on the images reconstructed by the phase- and amplitude-sorting algorithms. The target volume deviation was assessed by comparing the target volume acquired during irregular motion to the volume acquired during regular motion. The amplitude-sorting algorithm shows reduced artefacts for amplitude-only variations, while the phase-sorting algorithm shows reduced artefacts for frequency-only variations. For combined amplitude and frequency variations, both algorithms perform similarly. Most of the artefacts are blurring and incomplete structures. We found larger artefacts and volume differences for the Mid-Ventilation strategy than for the ITV strategy, resulting in a higher relative difference of the surface distortion value, which ranges between a maximum of 14.6% and a minimum of 4.1%. The amplitude-sorting algorithm is superior to the phase-sorting algorithm in the reduction of motion artefacts for amplitude variations, while the phase-sorting algorithm is superior for frequency variations. A proper choice of 4D CT sorting algorithm is important in order to reduce motion artefacts, especially if the Mid-Ventilation strategy is used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
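
    The distinction between the two sorting algorithms comes down to how a projection acquired at time t is binned from the breathing trace s(t): by its phase within the respiratory cycle, or by its amplitude. Schematic sketch with an invented trace and bin count (real sorting also handles pre-peak samples and cycle rejection):

      import numpy as np
      from scipy.signal import find_peaks

      def phase_bins(signal, t, n_bins=10):
          peaks, _ = find_peaks(signal)            # end-inhale times define the cycles
          phase = np.interp(t, t[peaks], np.arange(len(peaks)))  # cycle-fraction coordinate
          return (np.mod(phase, 1.0) * n_bins).astype(int)

      def amplitude_bins(signal, n_bins=10):
          edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
          return np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)

      t = np.linspace(0, 20, 2000)
      breath = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 0.05 * t)
      print(phase_bins(breath, t)[:5], amplitude_bins(breath)[:5])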

  7. Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.

    PubMed

    Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J

    2015-06-01

    Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
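
    Of the four algorithms compared, phase correlation is the most compact to illustrate: the normalized cross-power spectrum of two images has an inverse FFT that peaks at their relative translation. Minimal sketch for integer-pixel shifts:

      import numpy as np

      def phase_correlation(ref, moved, eps=1e-12):
          """Integer-pixel translation of `moved` relative to `ref` (2-D arrays)."""
          R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
          corr = np.fft.ifft2(R / (np.abs(R) + eps)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Fold shifts past the midpoint back to negative values.
          if dy > ref.shape[0] // 2:
              dy -= ref.shape[0]
          if dx > ref.shape[1] // 2:
              dx -= ref.shape[1]
          return dy, dx

      img = np.random.default_rng(0).random((128, 128))
      print(phase_correlation(img, np.roll(img, (5, -3), axis=(0, 1))))  # (5, -3)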

  8. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  9. A Flexible Statechart-to-Model-Checker Translator

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas; Dunphy, Julia; Feather, Martin S.

    2000-01-01

    Many current-day software design tools offer some variant of statechart notation for system specification. We, like others, have built an automatic translator from (a subset of) statecharts to a model checker, for use in validating behavioral requirements. Our translator is designed to be flexible. This allows us to quickly adjust the translator to variants of statechart semantics, including problem-specific notational conventions that designers employ. Our system demonstration will be of interest to the following two communities: (1) Potential end-users: Our demonstration will show translation from statecharts created in a commercial UML tool (Rational Rose) to Promela, the input language of Holzmann's model checker SPIN. The translation is accomplished automatically. To accommodate the major variants of statechart semantics, our tool offers user-selectable choices among semantic alternatives. Options for customized semantic variants are also made available. The net result is an easy-to-use tool that operates on a wide range of statechart diagrams to automate the pathway to model-checking input. (2) Other researchers: Our translator embodies, in one tool, ideas and approaches drawn from several sources. Solutions to the major challenges of statechart-to-model-checker translation (e.g., determining which transition(s) will fire, handling of concurrent activities) are realized in a uniform, fully mechanized setting. The way in which the underlying architecture of the translator itself facilitates flexible and customizable translation will also be evident.
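
    To make the translation concrete, here is a toy emitter that turns a flat state-transition table into a Promela proctype of the general kind such tools generate; a real statechart translator must additionally resolve hierarchy, concurrency, and which enabled transition fires. All names are invented, and the event flags would be driven by an environment process.

      def to_promela(name, states, init, events, transitions):
          """transitions: list of (src, guard_event, dst). Flat machine only."""
          out = ["mtype = { %s };" % ", ".join(states),
                 "mtype state = %s;" % init,
                 "bool %s;" % ", ".join(events),
                 "active proctype %s() {" % name,
                 "  do"]
          for src, ev, dst in transitions:
              out.append("  :: (state == %s && %s) -> state = %s" % (src, ev, dst))
          out += ["  od", "}"]
          return "\n".join(out)

      print(to_promela("machine", ["Idle", "Busy"], "Idle", ["start", "done"],
                       [("Idle", "start", "Busy"), ("Busy", "done", "Idle")]))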

  10. Synthesis of Algorithm for Range Measurement Equipment to Track Maneuvering Aircraft Using Data on Its Dynamic and Kinematic Parameters

    NASA Astrophysics Data System (ADS)

    Pudovkin, A. P.; Panasyuk, Yu N.; Danilov, S. N.; Moskvitin, S. P.

    2018-05-01

    The problem of improving automated air traffic control systems is considered through the example of the operation algorithm synthesis for a range measurement channel to track the aircraft, using its kinematic and dynamic parameters. The choice of the state and observation models has been justified, the computer simulations have been performed and the results of the investigated algorithms have been obtained.
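
    A baseline for such a range channel is a constant-velocity Kalman filter over the state [range, range rate]; the paper enriches the state and observation models with the aircraft's dynamic parameters, but the filtering skeleton is the same. Generic sketch with invented noise levels:

      import numpy as np

      def kalman_range_tracker(measurements, dt=1.0, q=1.0, r=100.0):
          F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
          H = np.array([[1.0, 0.0]])                   # range is observed directly
          Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
          x, P = np.array([measurements[0], 0.0]), np.eye(2) * 1e3
          track = []
          for z in measurements:
              x, P = F @ x, F @ P @ F.T + Q            # predict
              S = H @ P @ H.T + r                      # innovation covariance
              K = (P @ H.T) / S                        # Kalman gain (scalar innovation)
              x = x + (K * (z - H @ x)).ravel()        # update
              P = (np.eye(2) - K @ H) @ P
              track.append(x[0])
          return np.array(track)

      z = 1000 - 12.5 * np.arange(50) + np.random.default_rng(2).normal(0, 10, 50)
      print(kalman_range_tracker(z)[-1])               # smoothed final range estimate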

  11. Translational informatics: an industry perspective.

    PubMed

    Cantor, Michael N

    2012-01-01

    Translational informatics (TI) is extremely important for the pharmaceutical industry, especially as the bar for regulatory approval of new medications is set higher and higher. This paper will explore three specific areas in the drug development lifecycle, from tools developed by precompetitive consortia to standardized clinical data collection to the effective delivery of medications using clinical decision support, in which TI has a major role to play. Advancing TI will require investment in new tools and algorithms, as well as ensuring that translational issues are addressed early in the design process of informatics projects, and also given higher weight in funding or publication decisions. Ultimately, the source of translational tools and differences between academia and industry are secondary, as long as they move towards the shared goal of improving health.

  12. Atmospheric River Tracking Method Intercomparison Project (ARTMIP): Science Goals and Preliminary Analysis

    NASA Astrophysics Data System (ADS)

    Shields, C. A.; Rutz, J. J.; Wehner, M. F.; Ralph, F. M.; Leung, L. R.

    2017-12-01

    The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is a community effort whose purpose is to quantify uncertainties in atmospheric river (AR) research solely due to different identification and tracking techniques. Atmospheric rivers transport significant amounts of moisture in long, narrow filamentary bands, typically travelling from the subtropics to the mid-latitudes. They are an important source of regional precipitation impacting local hydroclimate, and in extreme cases, cause severe flooding and infrastructure damage in local communities. Our understanding of ARs, from forecast skill to future climate projections, all hinge on how we define ARs. By comparing a diverse set of detection algorithms, the uncertainty in our definition of ARs, (including statistics and climatology), and the implications of those uncertainties, can be analyzed and quantified. ARTMIP is divided into two broad phases that aim to answer science questions impacted by choice of detection algorithm. How robust are AR metrics such as climatology, storm duration, and relationship to extreme precipitation? How are the AR metrics in future climate projections impacted by choice of algorithm? Some algorithms rely on threshold values for water vapor. In a warmer world, the background state, by definition, is moister due to the Clausius-Clapeyron relationship, and could potentially skew results. Can uncertainty bounds be accurately placed on each metric? Tier 1 participants will apply their algorithms to a high resolution common dataset (MERRA2) and provide the greater group AR metrics (frequency, location, duration, etc). Tier 2 research will encompass sensitivity studies regarding resolution, reanalysis choice, and future climate change scenarios. ARTMIP is currently in the Tier 1 Phase and will begin Tier 2 in 2018. Preliminary metrics and analysis from Tier 1 will be presented.
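
    Most detection algorithms of the kind ARTMIP compares reduce, at their core, to a rule like the one sketched below: threshold integrated vapor transport (IVT) and keep only features that satisfy a geometric criterion. The 250 kg m-1 s-1 threshold and 2000 km minimum length are common but illustrative choices, precisely the kind of assumption the project is designed to compare.

      import numpy as np
      from scipy import ndimage

      def detect_ars(ivt, km_per_cell, ivt_thresh=250.0, min_length_km=2000.0):
          """Label candidate AR objects in a 2-D IVT field (kg m-1 s-1)."""
          labels, n = ndimage.label(ivt >= ivt_thresh)
          mask = np.zeros_like(labels, dtype=bool)
          for k in range(1, n + 1):
              cells = np.argwhere(labels == k)
              span = cells.max(0) - cells.min(0)       # bounding-box extent in cells
              if np.hypot(*span) * km_per_cell >= min_length_km:
                  mask |= labels == k                  # keep long, filamentary features
          return mask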

  13. Nonlinear Statistical Estimation with Numerical Maximum Likelihood

    DTIC Science & Technology

    1974-10-01

    Performance is probably most directly attributable to the speed, precision and compactness of the linear programming algorithm exercised; the mutual primal-dual... Discriminant analysis is used to classify the individual as a member of π1 or π2 according to the relative... Contents include: Introduction to the Dissertation; Introduction to Statistical Estimation Theory; Choice of Estimator... Density Functions; Choice of Estimator.

  14. Classification of voting algorithms for N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    A voting algorithm in N-version software is a crucial component that evaluates the execution of each of the N versions and determines the correct result. Obviously, the result of the voting algorithm determines the outcome of the N-version software in general. Thus, the choice of the voting algorithm is a vital issue. Many voting algorithms have already been developed, and they may be selected for implementation based on the specifics of the analysis of input data. However, the voting algorithms applied in N-version software have not previously been classified. This article presents an overview of classic and recent voting algorithms used in N-version software and the authors' classification of these voting algorithms. Moreover, the steps of the voting algorithms are presented and the distinctive features of voting algorithms in N-version software are defined.

  15. MED: a new non-supervised gene prediction algorithm for bacterial and archaeal genomes.

    PubMed

    Zhu, Huaiqiu; Hu, Gang-Qing; Yang, Yi-Fan; Wang, Jin; She, Zhen-Su

    2007-03-16

    Despite a remarkable success in the computational prediction of genes in Bacteria and Archaea, a lack of comprehensive understanding of prokaryotic gene structures prevents from further elucidation of differences among genomes. It continues to be interesting to develop new ab initio algorithms which not only accurately predict genes, but also facilitate comparative studies of prokaryotic genomes. This paper describes a new prokaryotic genefinding algorithm based on a comprehensive statistical model of protein coding Open Reading Frames (ORFs) and Translation Initiation Sites (TISs). The former is based on a linguistic "Entropy Density Profile" (EDP) model of coding DNA sequence and the latter comprises several relevant features related to the translation initiation. They are combined to form a so-called Multivariate Entropy Distance (MED) algorithm, MED 2.0, that incorporates several strategies in the iterative program. The iterations enable us to develop a non-supervised learning process and to obtain a set of genome-specific parameters for the gene structure, before making the prediction of genes. Results of extensive tests show that MED 2.0 achieves a competitive high performance in the gene prediction for both 5' and 3' end matches, compared to the current best prokaryotic gene finders. The advantage of the MED 2.0 is particularly evident for GC-rich genomes and archaeal genomes. Furthermore, the genome-specific parameters given by MED 2.0 match with the current understanding of prokaryotic genomes and may serve as tools for comparative genomic studies. In particular, MED 2.0 is shown to reveal divergent translation initiation mechanisms in archaeal genomes while making a more accurate prediction of TISs compared to the existing gene finders and the current GenBank annotation.
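
    One simplified reading of the EDP coding measure: translate an ORF, take the 20 amino-acid frequencies f_i, and form the normalized entropy-density vector with components proportional to -f_i log f_i; coding and non-coding ORFs then separate in this 20-dimensional space. The sketch below follows that reading, not the paper's exact definition.

      import math
      from collections import Counter

      def entropy_density_profile(protein):
          """Normalized 20-dim vector with s_i proportional to -f_i * log(f_i)."""
          counts = Counter(protein)
          n = len(protein)
          s = {}
          for aa in "ACDEFGHIKLMNPQRSTVWY":
              f = counts.get(aa, 0) / n
              s[aa] = -f * math.log(f) if f > 0 else 0.0
          total = sum(s.values())
          return {aa: v / total for aa, v in s.items()}

      print(entropy_density_profile("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))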

  16. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentation. Identification accuracy of those words produced by two talkers was also assessed. During the pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translate the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  17. Ischaemic conditioning: pitfalls on the path to clinical translation

    PubMed Central

    Przyklenk, Karin

    2015-01-01

    The development of novel adjuvant strategies capable of attenuating myocardial ischaemia-reperfusion injury and reducing infarct size remains a major, unmet clinical need. A wealth of preclinical evidence has established that ischaemic ‘conditioning’ is profoundly cardioprotective, and has positioned the phenomenon (in particular, the paradigms of postconditioning and remote conditioning) as the most promising and potent candidate for clinical translation identified to date. However, despite this preclinical consensus, current phase II trials have been plagued by heterogeneity, and the outcomes of recent meta-analyses have largely failed to confirm significant benefit. As a result, the path to clinical application has been perceived as ‘disappointing’ and ‘frustrating’. The goal of the current review is to discuss the pitfalls that may be stalling the successful clinical translation of ischaemic conditioning, with an emphasis on concerns regarding: (i) appropriate clinical study design and (ii) the choice of the ‘right’ preclinical models to facilitate clinical translation. PMID:25560903

  18. Science for Society Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, Amy K; Bjornstad, David J; Lenhardt, W Christopher

    Science for Society, a workshop held at the Oak Ridge National Laboratory (ORNL) on September 27, 2011, explored ways to move Laboratory science toward use. It sought actionable recommendations. Thus the workshop focused on: (1) current practices that promote and inhibit the translation of science into use, (2) principles that could lead to improving ORNL's translational knowledge and technology transfer efforts, and (3) specific recommendations for making these principles operational. This highly interactive workshop struck a positive chord with participants, a group of 26 ORNL staff members from diverse arenas of science and technology (S and T), technology transfer, and external laboratory relations, who represented all levels of science, technology, and management. Recognizing that the transformation of fundamental principles into operational practices often follows a jagged path, the workshop sought to identify key choices that could lead to a smoother journey along this path, as well as choices that created roadblocks and bottlenecks. The workshop emphasized a portion of this pathway, largely excluding the marketplace. Participants noted that research translation includes linkages between fundamental and applied research and development (R and D), and is not restricted to uptake by manufacturers, consumers, or end users. Three crosscutting ideas encapsulate workshop participants' observations: (1) ORNL should take more action to usher the translation of its S and T products toward use, so as to make a positive national and global impact and to enhance its own competitiveness in the future; (2) ORNL (and external entities such as DOE and Congress) conveys inconsistent messages with regard to the importance of research translation and application, which (a) creates confusion, (b) poses disincentives to pursue research translation, (c) imposes barriers that inhibit cross-fertilization and collaboration, and (d) diminishes the effectiveness of both the science mission and the translation of that science for use; and (3) ORNL should design its commitments and actions for helping move science from the Laboratory toward use to align with one another and should integrate them into its institutional culture in such a way as to elevate research translation and application to coequal status with scientific excellence. Participants made several actionable recommendations for enhancing research translation at ORNL, some of which were particular to specific S and T domains. Among the recommendations that participants agreed apply Lab-wide are to: align metrics and incentives with research translation goals; manage risks and conflicts of interest instead of avoiding them; and create programs (e.g., entrepreneurial leave) that promote interactions between key ORNL staff and industry in ways that complement careers at ORNL.

  19. Results of the First Year of Active for Life: Translation of 2 Evidence-Based Physical Activity Programs for Older Adults Into Community Settings

    PubMed Central

    Wilcox, Sara; Dowda, Marsha; Griffin, Sarah F.; Rheaume, Carol; Ory, Marcia G.; Leviton, Laura; King, Abby C.; Dunn, Andrea; Buchner, David M.; Bazzarre, Terry; Estabrooks, Paul A.; Campbell-Voytal, Kimberly; Bartlett-Prescott, Jenny; Dowdy, Diane; Castro, Cynthia M.; Carpenter, Ruth Ann; Dzewaltowski, David A.; Mockenhaupt, Robin

    2006-01-01

    Objectives. Translating efficacious interventions into practice within community settings is a major public health challenge. We evaluated the effects of 2 evidence-based physical activity interventions on self-reported physical activity and related outcomes in midlife and older adults. Methods. Four community-based organizations implemented Active Choices, a 6-month, telephone-based program, and 5 implemented Active Living Every Day, a 20-week, group-based program. Both programs emphasize behavioral skills necessary to become more physically active. Participants completed pretest and posttest surveys. Results. Participants (n=838) were aged an average of 68.4 ±9.4 years, 80.6% were women, and 64.1% were non-Hispanic White. Seventy-two percent returned posttest surveys. Intent-to-treat analyses found statistically significant increases in moderate-to-vigorous physical activity and total physical activity, decreases in depressive symptoms and stress, increases in satisfaction with body appearance and function, and decreases in body mass index. Conclusions. The first year of Active for Life demonstrated that Active Choices and Active Living Every Day, 2 evidence-based physical activity programs, can be successfully translated into community settings with diverse populations. Further, the magnitudes of change in outcomes were similar to those reported in the efficacy trials. PMID:16735619

  20. A Self Adaptive Differential Evolution Algorithm for Global Optimization

    NASA Astrophysics Data System (ADS)

    Kumar, Pravesh; Pant, Millie

    This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First we propose a self-adaptive DE named ADE, in which the control parameters F and Cr are not fixed at constant values but are adapted iteratively. The proposed algorithm is further modified by applying trigonometric mutation, and the corresponding algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results show the competence of the proposed algorithm.
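
    The self-adaptive part can be sketched jDE-style: each individual carries its own F and Cr, occasionally resampled, and the values survive only when they produce a winning trial; ATDE would additionally replace the rand/1 mutation with trigonometric mutation on some fraction of steps. The constants below are conventional choices, not the paper's.

      import numpy as np

      def ade(f, dim=10, NP=40, iters=500, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(-5, 5, (NP, dim))
          F, Cr = np.full(NP, 0.5), np.full(NP, 0.9)
          fit = np.array([f(x) for x in X])
          for _ in range(iters):
              for i in range(NP):
                  # Self-adaptation: resample this individual's F and Cr sometimes.
                  Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
                  Ci = rng.uniform(0.0, 1.0) if rng.random() < 0.1 else Cr[i]
                  a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
                  mutant = X[a] + Fi * (X[b] - X[c])       # DE/rand/1 mutation
                  cross = rng.random(dim) < Ci
                  cross[rng.integers(dim)] = True          # at least one gene crosses
                  trial = np.where(cross, mutant, X[i])
                  ft = f(trial)
                  if ft <= fit[i]:                         # greedy selection keeps F, Cr too
                      X[i], fit[i], F[i], Cr[i] = trial, ft, Fi, Ci
          return X[np.argmin(fit)], fit.min()

      x, v = ade(lambda x: float(np.sum(x**2)))            # sphere test function
      print(v)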

  1. [Using cancer case identification algorithms in medico-administrative databases: Literature review and first results from the REDSIAM Tumors group based on breast, colon, and lung cancer].

    PubMed

    Bousquet, P-J; Caillet, P; Coeuret-Pellicer, M; Goulard, H; Kudjawu, Y C; Le Bihan, C; Lecuyer, A I; Séguret, F

    2017-10-01

    The development and use of healthcare databases accentuates the need for dedicated tools, including validated algorithms for selecting patients with cancer. As part of the development of the French National Health Insurance System data network REDSIAM, the tumor taskforce established an inventory of published national and international algorithms in the field of cancer. This work aims to facilitate the choice of a best-suited algorithm. A non-systematic literature search was conducted for various cancers. Results are presented for lung, breast, colon, and rectal cancers. Medline, Scopus, the French Database in Public Health, Google Scholar, and the summaries of the main French journals in oncology and public health were searched for publications up to August 2016. An extraction grid adapted to oncology was constructed and used for the extraction process. A total of 18 publications were selected for lung cancer, 18 for breast cancer, and 12 for colorectal cancer. Validation studies of algorithms are scarce. When information is available, the performance and choice of an algorithm depend on the context, purpose, and location of the planned study. Accounting for the specificities of cancer, the proposed extraction grid is more detailed than the generic grid developed for other REDSIAM taskforces, but remains easily usable in practice. This study illustrates the complexity of cancer detection through sole reliance on healthcare databases and the lack of validated algorithms specifically designed for this purpose. Studies that standardize and facilitate validation of these algorithms should be developed and promoted. Copyright © 2017. Published by Elsevier Masson SAS.

  2. Translation compensation and micro-Doppler extraction for precession ballistic targets with a wideband terahertz radar

    NASA Astrophysics Data System (ADS)

    Yang, Qi; Deng, Bin; Wang, Hongqiang; Zhang, Ye; Qin, Yuliang

    2018-01-01

    Imaging, classification, and recognition techniques for ballistic targets in midcourse have always been a focus of research in the radar field for military applications. However, the high-velocity translation of ballistic targets causes range profiles and Doppler signatures to shift, slope, and fold, effects that are especially severe in the terahertz region. Therefore, a two-step translation compensation method based on envelope alignment is presented. The rough compensation is based on the traditional envelope alignment algorithm in inverse synthetic aperture radar imaging, and the fine compensation is supported by distance fitting. Then, a wideband imaging radar system with a carrier frequency of 0.32 THz is introduced, and an experiment on a precession missile model is carried out. After translation compensation with the proposed method, range profiles and micro-Doppler distributions unaffected by translation are obtained, providing an important foundation for high-resolution imaging and micro-Doppler extraction with terahertz radar.

  3. Neural correlates of forward planning in a spatial decision task in humans

    PubMed Central

    Simon, Dylan Alexander; Daw, Nathaniel D.

    2011-01-01

    Although reinforcement learning (RL) theories have been influential in characterizing the brain’s mechanisms for reward-guided choice, the predominant temporal difference (TD) algorithm cannot explain many flexible or goal-directed actions that have been demonstrated behaviorally. We investigate such actions by contrasting an RL algorithm that is model-based, in that it relies on learning a map or model of the task and planning within it, to traditional model-free TD learning. To distinguish these approaches in humans, we used fMRI in a continuous spatial navigation task, in which frequent changes to the layout of the maze forced subjects continually to relearn their favored routes, thereby exposing the RL mechanisms employed. We sought evidence for the neural substrates of such mechanisms by comparing choice behavior and BOLD signals to decision variables extracted from simulations of either algorithm. Both choices and value-related BOLD signals in striatum, though most often associated with TD learning, were better explained by the model-based theory. Further, predecessor quantities for the model-based value computation were correlated with BOLD signals in the medial temporal lobe and frontal cortex. These results point to a significant extension of both the computational and anatomical substrates for RL in the brain. PMID:21471389
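
    To make the contrast between the two RL systems concrete, the following minimal sketch (not from the paper) pairs a model-free TD(0) update, which adjusts only the experienced state-action value, with model-based planning, which backs values up through an explicit transition model. The toy MDP, rewards, and parameter values are assumptions for illustration.

```python
import numpy as np

# Toy MDP: 4 states, 2 actions, known transition model and state rewards.
n_states, n_actions = 4, 2
rng = np.random.default_rng(1)
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P(s'|s,a)
R = rng.normal(size=n_states)                                     # state rewards

# Model-free TD(0): update only the experienced state-action value.
Q = np.zeros((n_states, n_actions))
def td_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# Model-based: plan by iterating Bellman backups over the transition model.
def plan_values(gamma=0.9, iters=100):
    V = np.zeros(n_states)
    for _ in range(iters):
        V = R + gamma * np.max(T @ V, axis=1)  # Bellman optimality backup
    return V

td_update(s=0, a=1, r=R[2], s_next=2)  # one experienced transition
print(plan_values())                   # values obtained by explicit planning
```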

  4. Gene selection for microarray cancer classification using a new evolutionary method employing artificial intelligence concepts.

    PubMed

    Dashtban, M; Balafar, Mohammadali

    2017-03-01

    Gene selection is a demanding task for microarray data analysis. The diverse complexity of different cancers makes this issue still challenging. In this study, a novel evolutionary method based on genetic algorithms and artificial intelligence is proposed to identify predictive genes for cancer classification. A filter method was first applied to reduce the dimensionality of the feature space, followed by an integer-coded genetic algorithm with dynamic-length genotype, intelligent parameter settings, and modified operators. The algorithmic behaviors, including convergence trends, mutation and crossover rate changes, and running time, were studied, conceptually discussed, and shown to be coherent with literature findings. Two well-known filter methods, Laplacian and Fisher score, were examined considering similarities, the quality of selected genes, and their influences on the evolutionary approach. Several statistical tests concerning choice of classifier, choice of dataset, and choice of filter method were performed, and they revealed significant differences between the performance of different classifiers and filter methods over the datasets. The proposed method was benchmarked on five popular high-dimensional cancer datasets; for each, the top explored genes were reported. Comparing the experimental results with several state-of-the-art methods revealed that the proposed method outperforms previous methods on the DLBCL dataset. Copyright © 2017 Elsevier Inc. All rights reserved.
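
    The filter stage mentioned above can be illustrated with a small sketch of the Fisher score, one of the two filter methods examined; the number of genes retained and the synthetic data are assumptions, not the paper's pipeline.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per gene (feature); higher means more discriminative."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        nc = len(Xc)
        num += nc * (Xc.mean(axis=0) - mean_all) ** 2  # between-class scatter
        den += nc * Xc.var(axis=0)                     # within-class scatter
    return num / (den + 1e-12)

# keep the top-k genes before running the evolutionary search (k assumed)
rng = np.random.default_rng(0)
X = rng.random((60, 2000))            # 60 samples x 2000 genes (synthetic)
y = rng.integers(0, 2, 60)
top_k = np.argsort(fisher_score(X, y))[::-1][:200]
X_reduced = X[:, top_k]
print(X_reduced.shape)                # (60, 200)
```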

  5. An Algorithm to Improve Test Answer Copying Detection Using the Omega Statistic

    ERIC Educational Resources Information Center

    Maeda, Hotaka; Zhang, Bo

    2017-01-01

    The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…

  6. Cocaine choice procedures in animals, humans, and treatment-seekers: Can we bridge the divide?

    PubMed Central

    Moeller, Scott J.; Stoops, William W.

    2015-01-01

    Individuals with cocaine use disorder chronically self-administer cocaine to the detriment of other rewarding activities, a phenomenon best modeled in laboratory drug-choice procedures. These procedures can evaluate the reinforcing effects of drugs versus comparably valuable alternatives under multiple behavioral arrangements and schedules of reinforcement. However, assessing drug-choice in treatment-seeking or abstaining humans poses unique challenges: for ethical reasons, these populations typically cannot receive active drugs during research studies. Researchers have thus needed to rely on alternative approaches that approximate drug-choice behavior or assess more general forms of decision-making, but whether these alternatives have relevance to real-world drug-taking that can inform clinical trials is not well-understood. In this mini-review, we (A) summarize several important modulatory variables that influence cocaine choice in nonhuman animals and non-treatment seeking humans; (B) discuss some of the ethical considerations that could arise if treatment-seekers are enrolled in drug-choice studies; (C) consider the efficacy of alternative procedures, including non-drug-related decision-making and ‘simulated’ drug-choice (a choice is made, but no drug is administered) to approximate drug choice; and (D) suggest opportunities for new translational work to bridge the current divide between preclinical and clinical research. PMID:26432174

  7. Algorithms for Data Intensive Applications on Intelligent and Smart Memories

    DTIC Science & Technology

    2003-03-01

    editors). Parallel Algorithms and Architectures. North Holland, 1986. [8] P. Diniz. USC ISI, Personal Communication, March 2001. [9] M. Frigo, C. E. ...hierarchy as well as the Translation Lookaside Buffer (TLB) affect the effectiveness of cache-friendly optimizations. These penalties vary among...processors and cause large variations in the effectiveness of cache performance optimizations. The area of graph problems is fundamental in a wide variety of

  8. Evaluation of a treatment-based classification algorithm for low back pain: a cross-sectional study.

    PubMed

    Stanton, Tasha R; Fritz, Julie M; Hancock, Mark J; Latimer, Jane; Maher, Christopher G; Wand, Benedict M; Parent, Eric C

    2011-04-01

    Several studies have investigated criteria for classifying patients with low back pain (LBP) into treatment-based subgroups. A comprehensive algorithm was created to translate these criteria into a clinical decision-making guide. This study investigated the translation of the individual subgroup criteria into a comprehensive algorithm by studying the prevalence of patients meeting the criteria for each treatment subgroup and the reliability of the classification. This was a cross-sectional, observational study. Two hundred fifty patients with acute or subacute LBP were recruited from the United States and Australia to participate in the study. Trained physical therapists performed standardized assessments on all participants. The researchers used these findings to classify participants into subgroups. Thirty-one participants were reassessed to determine interrater reliability of the algorithm decision. Based on individual subgroup criteria, 25.2% (95% confidence interval [CI]=19.8%-30.6%) of the participants did not meet the criteria for any subgroup, 49.6% (95% CI=43.4%-55.8%) of the participants met the criteria for only one subgroup, and 25.2% (95% CI=19.8%-30.6%) of the participants met the criteria for more than one subgroup. The most common combination of subgroups was manipulation + specific exercise (68.4% of the participants who met the criteria for 2 subgroups). Reliability of the algorithm decision was moderate (kappa=0.52, 95% CI=0.27-0.77, percentage of agreement=67%). Due to a relatively small patient sample, reliability estimates are somewhat imprecise. These findings provide important clinical data to guide future research and revisions to the algorithm. The finding that 25% of the participants met the criteria for more than one subgroup has important implications for the sequencing of treatments in the algorithm. Likewise, the finding that 25% of the participants did not meet the criteria for any subgroup provides important information regarding potential revisions to the algorithm's bottom table (which guides unclear classifications). Reliability of the algorithm is sufficient for clinical use.

  9. Fast registration and reconstruction of aliased low-resolution frames by use of a modified maximum-likelihood approach.

    PubMed

    Alam, M S; Bognar, J G; Cain, S; Yasuda, B J

    1998-03-10

    During the process of microscanning a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications that use the currently available high-performance processors. The new image shifts are iteratively calculated by evaluation of a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce significantly the computational burden when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
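
    A generic stand-in for the one-step shift estimation described above is phase correlation, sketched below; the paper's actual cost-function formulation is not reproduced, and the frame sizes and shifts are illustrative.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer translational shift of `frame` relative to `ref`
    in one step via phase correlation (a stand-in for the paper's method)."""
    Fr = np.fft.fft2(ref)
    Ff = np.fft.fft2(frame)
    cross_power = Ff * np.conj(Fr)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# usage: a frame shifted by (3, -2) pixels should be recovered
ref = np.random.rand(64, 64)
frame = np.roll(ref, (3, -2), axis=(0, 1))
print(estimate_shift(ref, frame))  # -> (3, -2)
```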

  10. Automatic elastic image registration by interpolation of 3D rotations and translations from discrete rigid-body transformations.

    PubMed

    Walimbe, Vivek; Shekhar, Raj

    2006-12-01

    We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
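
    The interpolation scheme (scalar interpolation of translations, quaternion interpolation of rotations) can be sketched as follows; the blending weight stands in for whatever spatial weighting the subvolume layout implies, and the example transforms are arbitrary.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_rigid(t0, t1, R0, R1, alpha):
    """Blend two six-parameter rigid transforms at fraction alpha in [0, 1]:
    componentwise (scalar) interpolation of translations, quaternion slerp
    for the rotational pose."""
    t = (1 - alpha) * np.asarray(t0) + alpha * np.asarray(t1)   # translation lerp
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([R0, R1]))   # quaternion slerp
    return t, slerp([alpha])[0]

t, R = interpolate_rigid([0, 0, 0], [10, 0, 0],
                         Rotation.identity(),
                         Rotation.from_euler("z", 90, degrees=True),
                         0.5)
print(t, R.as_euler("zyx", degrees=True))  # -> [5. 0. 0.] [45. 0. 0.]
```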

  11. Algorithmic analysis of relational learning processes in instructional technology: Some implications for basic, translational, and applied research.

    PubMed

    McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David

    2018-07-01

    A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper will focus on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may have utility also for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. It also presents arguments that such algorithm development may encourage parametric research, help in integrating new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model control of basic behavioral processes that is important to the design of effective programmed instruction for human learners with and without functional disabilities. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. The Interface of Clinical Decision-Making With Study Protocols for Knowledge Translation From a Walking Recovery Trial.

    PubMed

    Hershberg, Julie A; Rose, Dorian K; Tilson, Julie K; Brutsch, Bettina; Correa, Anita; Gallichio, Joann; McLeod, Molly; Moore, Craig; Wu, Sam; Duncan, Pamela W; Behrman, Andrea L

    2017-01-01

    Despite efforts to translate knowledge into clinical practice, barriers often arise in adapting the strict protocols of a randomized, controlled trial (RCT) to the individual patient. The Locomotor Experience Applied Post-Stroke (LEAPS) RCT demonstrated equal effectiveness of 2 intervention protocols for walking recovery poststroke; both protocols were more effective than usual care physical therapy. The purpose of this article was to provide knowledge-translation tools to facilitate implementation of the LEAPS RCT protocols into clinical practice. Participants from 2 of the trial's intervention arms, (1) early Locomotor Training Program (LTP) and (2) Home Exercise Program (HEP), were chosen for case presentation. The two cases illustrate how the protocols are used in synergy with individual patient presentations and clinical expertise. Decision algorithms and guidelines for progression represent the interface between implementation of an RCT standardized intervention protocol and clinical decision-making. In each case, the participant presents with a distinct clinical challenge that the therapist addresses by integrating the participant's unique presentation with the therapist's expertise while maintaining fidelity to the LEAPS protocol. Both participants progressed through an increasingly challenging intervention despite their own unique presentations. Decision algorithms and exercise progression for the LTP and HEP protocols facilitate translation of the RCT protocol to the real world of clinical practice. The two case examples facilitate translation of the LEAPS RCT into clinical practice by enhancing understanding of the protocols, their progression, and their application to individual participants. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A147).

  13. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch; Swiss Institute of Bioinformatics

    2015-07-28

    Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
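
    A minimal Gillespie-type stochastic simulation of translation conveys the flavor of the approach: free ribosomes bind an mRNA and later release a finished protein. The species, rates, and the omission of codon-by-codon ribosome movement are simplifications relative to the paper's model.

```python
import numpy as np

# Toy stochastic simulation (Gillespie/SSA) of translation noise:
# ribosomes leave the free pool on initiation and return on completion.
rng = np.random.default_rng(42)
ribosomes, bound, protein = 100, 0, 0
k_init, k_elong = 0.05, 0.2     # per-ribosome initiation / completion rates
t, t_end = 0.0, 500.0
while t < t_end:
    a1 = k_init * ribosomes      # initiation propensity
    a2 = k_elong * bound         # completion propensity
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)          # time to next reaction event
    if rng.random() < a1 / a0:              # choose which reaction fires
        ribosomes, bound = ribosomes - 1, bound + 1
    else:
        ribosomes, bound, protein = ribosomes + 1, bound - 1, protein + 1
print(f"proteins made: {protein}, mean rate: {protein / t_end:.3f}/s")
```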

  14. The new and computationally efficient MIL-SOM algorithm: potential benefits for visualization and analysis of a large-scale high-dimensional clinically acquired geographic data.

    PubMed

    Oyana, Tonny J; Achenie, Luke E K; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive and improved updating procedure, good robustness, and faster execution than Kohonen's SOM.

  15. The New and Computationally Efficient MIL-SOM Algorithm: Potential Benefits for Visualization and Analysis of a Large-Scale High-Dimensional Clinically Acquired Geographic Data

    PubMed Central

    Oyana, Tonny J.; Achenie, Luke E. K.; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive and improved updating procedure, good robustness, and faster execution than Kohonen's SOM. PMID:22481977
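
    For reference, the classical Kohonen update that MIL-SOM modifies is sketched below; only the plain proportional learning-rate step is shown, since the paper's PID gains are not given here, and the grid size and schedules are illustrative assumptions.

```python
import numpy as np

def som_step(weights, x, lr, sigma):
    """One classical Kohonen training step on a (rows, cols, dim) weight grid.
    MIL-SOM would replace the plain learning-rate term below with a
    PID-controller-style update (not reproduced here)."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
    ii, jj = np.indices((rows, cols))
    grid_d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))                # neighborhood kernel
    weights += lr * h[..., None] * (x - weights)           # proportional update
    return weights

rng = np.random.default_rng(0)
W = rng.random((10, 10, 3))
for epoch, x in enumerate(rng.random((1000, 3))):
    frac = epoch / 1000
    som_step(W, x, lr=0.5 * (1 - frac), sigma=3.0 * (1 - frac) + 0.5)
```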

  16. Translation of one high-level language to another: COBOL to ADA, an example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, J.A.

    1986-01-01

    This dissertation discusses the difficulties encountered in, and explores possible solutions to, the task of automatically converting programs written in one HLL, COBOL, into programs written in another HLL, Ada, while still maintaining readability. This paper presents at least one set of techniques and algorithms to solve many of the problems that were encountered. The differing view of records is handled by isolating those instances where it is a problem, then using the RENAMES option of Ada. Several solutions for the decimal-arithmetic translation are discussed. One method is to emulate COBOL arithmetic in an arithmetic package. Another partial solution suggested is to convert the values to decimal-scaled integers and use modular arithmetic. Conversion to fixed-point type and floating-point type are the third and fourth methods. The work of another researcher, Bobby Othmer, is utilized to correct any unstructured code, to remap statements not directly translatable such as ALTER, and to pull together isolated code sections. Algorithms are then presented to convert this restructured COBOL code into Ada code with local variables, parameters, and packages. The input/output requirements are partially met by mapping them to a series of procedure calls that interface with Ada's standard input-output package. Several examples are given of hand translations of COBOL programs. In addition, a possibly new method is shown for measuring the readability of programs.
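
    The "arithmetic package" approach to COBOL's decimal arithmetic can be illustrated with Python's decimal module, used here purely as an analogy for such a package: values are stored and rounded at a fixed scale, avoiding binary floating-point surprises. The field size and rounding mode are assumptions.

```python
from decimal import Decimal, getcontext, ROUND_HALF_UP

# Analogy for emulating a COBOL fixed-point field (e.g., PIC 9(3)V99)
# in an arithmetic package: every result is re-scaled to two decimals
# with half-up rounding, as COBOL storage would force.
getcontext().rounding = ROUND_HALF_UP

def pic_9v99(value):
    """Round/scale a value as a two-decimal fixed-point field would store it."""
    return Decimal(value).quantize(Decimal("0.01"))

price = pic_9v99("19.995")      # -> Decimal('20.00'), half-up rounding
qty = Decimal(3)
total = pic_9v99(price * qty)   # multiply, then re-scale to the field
print(total)                    # 60.00
```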

  17. Optical-Fiber-Welding Machine

    NASA Technical Reports Server (NTRS)

    Goss, W. C.; Mann, W. A.; Goldstein, R.

    1985-01-01

    Technique yields joints with average transmissivity of 91.6 percent. Electric arc passed over butted fiber ends to melt them together. Maximum optical transmissivity of joint achieved with optimum choice of discharge current, translation speed, and axial compression of fibers. Practical welding machine enables delicate and tedious joining operation performed routinely.

  18. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences: the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, in which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based control and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
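
    The eligibility adjustment idea can be sketched as a TD(λ) update whose trace decay is weighted by a model-derived signal. The modulation variable w_model, the toy chain task, and all parameter values are assumptions; the paper's exact update equations are not reproduced.

```python
import numpy as np

n_states = 5
V = np.zeros(n_states)
e = np.zeros(n_states)           # eligibility traces
alpha, gamma, lam = 0.1, 0.95, 0.9

def td_lambda_step(s, r, s_next, w_model):
    """w_model in [0, 1]: how strongly the learned environment model lets
    credit flow back to earlier states on this transition (assumed form)."""
    global V, e
    delta = r + gamma * V[s_next] - V[s]
    e *= gamma * lam * w_model   # model-weighted trace decay
    e[s] += 1.0
    V += alpha * delta * e       # all eligible states share the TD error

# one pass through a chain task with reward at the end
for s in range(n_states - 1):
    td_lambda_step(s, r=1.0 if s == n_states - 2 else 0.0,
                   s_next=s + 1, w_model=0.8)
print(V)
```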

  19. Multiscale stochastic simulations of chemical reactions with regulated scale separation

    NASA Astrophysics Data System (ADS)

    Koumoutsakos, Petros; Feigelman, Justin

    2013-07-01

    We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ-leaping and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions, with a trade-off in accuracy as controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can be, in certain cases, more accurate while faster than their corresponding stochastic simulation algorithm counterparts.
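
    The accelerated macro sub-step can be illustrated with a minimal τ-leaping update, in which each reaction fires a Poisson-distributed number of times per leap. The two-reaction birth-death system and its rates are an illustrative choice, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.array([50])               # molecule count of one species
nu = np.array([[+1], [-1]])      # stoichiometry: birth, death

def propensities(x, k_birth=10.0, k_death=0.1):
    return np.array([k_birth, k_death * x[0]])

def tau_leap_step(x, tau):
    a = propensities(x)
    fires = rng.poisson(a * tau)           # reaction counts during the leap
    return np.maximum(x + fires @ nu, 0)   # apply stoichiometry, clip at zero

t, tau = 0.0, 0.05
for _ in range(1000):
    x = tau_leap_step(x, tau)
    t += tau
print(f"t={t:.1f}, count={x[0]} (analytic mean = k_birth/k_death = 100)")
```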

  20. Reproducibility of graph metrics of human brain structural networks.

    PubMed

    Duda, Jeffrey T; Cook, Philip A; Gee, James C

    2014-01-01

    Recent interest in human brain connectivity has led to the application of graph theoretical analysis to human brain structural networks, in particular white matter connectivity inferred from diffusion imaging and fiber tractography. While these methods have been used to study a variety of patient populations, there has been less examination of the reproducibility of these methods. A number of tractography algorithms exist and many of these are known to be sensitive to user-selected parameters. The methods used to derive a connectivity matrix from fiber tractography output may also influence the resulting graph metrics. Here we examine how these algorithm and parameter choices influence the reproducibility of proposed graph metrics on a publicly available test-retest dataset consisting of 21 healthy adults. The dice coefficient is used to examine topological similarity of constant density subgraphs both within and between subjects. Seven graph metrics are examined here: mean clustering coefficient, characteristic path length, largest connected component size, assortativity, global efficiency, local efficiency, and rich club coefficient. The reproducibility of these network summary measures is examined using the intraclass correlation coefficient (ICC). Graph curves are created by treating the graph metrics as functions of a parameter such as graph density. Functional data analysis techniques are used to examine differences in graph measures that result from the choice of fiber tracking algorithm. The graph metrics consistently showed good levels of reproducibility as measured with ICC, with the exception of some instability at low graph density levels. The global and local efficiency measures were the most robust to the choice of fiber tracking algorithm.

  1. Computing effective properties of random heterogeneous materials on heterogeneous parallel processors

    NASA Astrophysics Data System (ADS)

    Leidi, Tiziano; Scocchi, Giulio; Grossi, Loris; Pusterla, Simone; D'Angelo, Claudio; Thiran, Jean-Philippe; Ortona, Alberto

    2012-11-01

    In recent decades, finite element (FE) techniques have been extensively used for predicting effective properties of random heterogeneous materials. In the case of very complex microstructures, the choice of numerical methods for the solution of this problem can offer some advantages over classical analytical approaches, and it allows the use of digital images obtained from real material samples (e.g., using computed tomography). On the other hand, a large number of elements is often necessary for properly describing complex microstructures, ultimately leading to extremely time-consuming computations and high memory requirements. With the final objective of reducing these limitations, we improved an existing freely available FE code for the computation of the effective conductivity (electrical and thermal) of microstructure digital models. To allow execution on hardware combining multi-core CPUs and a GPU, we first translated the original algorithm from Fortran to C and subdivided it into software components. Then, we enhanced the C version of the algorithm for parallel processing with heterogeneous processors. With the goal of maximizing the obtained performance and limiting resource consumption, we utilized a software architecture based on stream processing, event-driven scheduling, and dynamic load balancing. The parallel processing version of the algorithm has been validated using a simple microstructure consisting of a single sphere located at the centre of a cubic box, yielding consistent results. Finally, the code was used to calculate the effective thermal conductivity of a digital model of a real sample (a ceramic foam obtained using X-ray computed tomography). On a computer equipped with dual hexa-core Intel Xeon X5670 processors and an NVIDIA Tesla C2050, the parallel application version shows near-linear speed-up when using only the CPU cores, and it executes more than 20 times faster when additionally using the GPU.

  2. A Rule-Based System Implementing a Method for Translating FOL Formulas into NL Sentences

    NASA Astrophysics Data System (ADS)

    Mpagouli, Aikaterini; Hatzilygeroudis, Ioannis

    In this paper, we mainly present the implementation of a system that translates first-order logic (FOL) formulas into natural language (NL) sentences. The motivation comes from an intelligent tutoring system that teaches logic as a knowledge representation language, where the translator is used as a means of feedback to the student-users. FOL-to-NL conversion is achieved using a rule-based approach, where we exploit the pattern-matching capabilities of rules. So, the system consists of rule-based modules corresponding to the phases of our translation methodology. Facts are used in a lexicon providing lexical and grammatical information that helps in producing the NL sentences. The whole system is implemented in Jess, a Java-implemented rule-based programming tool. Experimental results confirm the success of our choices.
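
    A toy version of the rule-based pattern idea (not the paper's Jess implementation) matches a FOL pattern and fills an NL template; the patterns, templates, and helper names are assumptions.

```python
import re

def art(noun):
    """Choose 'a'/'an' naively by first letter (illustrative only)."""
    return "an" if noun[0].lower() in "aeiou" else "a"

# each rule: a FOL pattern plus an NL template filled from the match
RULES = [
    (re.compile(r"forall (\w+)\. (\w+)\(\1\) -> (\w+)\(\1\)"),
     lambda m: f"every {m.group(2)} is {art(m.group(3))} {m.group(3)}"),
    (re.compile(r"exists (\w+)\. (\w+)\(\1\)"),
     lambda m: f"there is {art(m.group(2))} {m.group(2)}"),
]

def fol_to_nl(formula):
    """Return the NL sentence produced by the first matching rule."""
    for pattern, template in RULES:
        m = pattern.fullmatch(formula)
        if m:
            return template(m)
    return "(no rule matched)"

print(fol_to_nl("forall x. dog(x) -> animal(x)"))  # every dog is an animal
print(fol_to_nl("exists y. cat(y)"))               # there is a cat
```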

  3. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    PubMed Central

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird's nest locations. In order to select a reasonable number of repeat-cycled disturbances, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the algorithm of this paper is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164
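
    The core CS step with Lévy flights, plus a simple repeated local disturbance around the best nest to echo the RC-SSCS idea, might look like the sketch below; the disturbance form, Mantegna Lévy-step constants, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.special import gamma as Gamma

rng = np.random.default_rng(3)

def levy(dim, beta=1.5):
    """Lévy-flight step via Mantegna's algorithm."""
    sigma = (Gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (Gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def cs_step(nests, fobj, best, alpha=0.01, pa=0.25, n_disturb=3):
    n, dim = nests.shape
    for i in range(n):
        cand = nests[i] + alpha * levy(dim) * (nests[i] - best)  # Lévy move
        if fobj(cand) < fobj(nests[i]):
            nests[i] = cand
    # abandon a fraction pa of the worst nests (re-seed randomly)
    order = np.argsort([fobj(x) for x in nests])
    k = max(1, int(pa * n))
    nests[order[-k:]] = rng.uniform(-5, 5, (k, dim))
    # repeat-cycled local disturbance around the current best nest
    for _ in range(n_disturb):
        probe = best + rng.normal(0, 0.1, dim)
        if fobj(probe) < fobj(best):
            best = probe
    best_nest = nests[np.argmin([fobj(x) for x in nests])].copy()
    return nests, (best_nest if fobj(best_nest) < fobj(best) else best)

sphere = lambda v: float(np.sum(v ** 2))
nests = rng.uniform(-5, 5, (15, 4))
best = nests[np.argmin([sphere(x) for x in nests])].copy()
for _ in range(300):
    nests, best = cs_step(nests, sphere, best)
print(best, sphere(best))
```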

  4. Modeling and executing electronic health records driven phenotyping algorithms using the NQF Quality Data Model and JBoss® Drools Engine.

    PubMed

    Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M; Chute, Christopher G; Pathak, Jyotishman

    2012-01-01

    With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation, and subsequent implementation is a required step for this process. To address this need, the current study investigates open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using Apache Foundation's Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system.

  5. Modeling and Executing Electronic Health Records Driven Phenotyping Algorithms using the NQF Quality Data Model and JBoss® Drools Engine

    PubMed Central

    Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M.; Chute, Christopher G.; Pathak, Jyotishman

    2012-01-01

    With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation, and subsequent implementation is a required step for this process. To address this need, the current study investigates open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using Apache Foundation’s Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system. PMID:23304325

  6. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    NASA Astrophysics Data System (ADS)

    MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.

    2017-01-01

    This paper presents a brief synthesis and a performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite in terms of accuracy, convergence time, memory footprint, and computation time. The latter is calculated in two ways, using a personal computer and also using the On-Board Computer 750 (OBC 750) that is used in many SSTL Earth observation missions. This comparative study can serve as a design aid when choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the more logical choice.

  7. Airport Flight Departure Delay Model on Improved BN Structure Learning

    NASA Astrophysics Data System (ADS)

    Cao, Weidong; Fang, Xiangnong

    A high-score-prior genetic simulated annealing Bayesian network structure learning algorithm (HSPGSA), combining a genetic algorithm (GA) with a simulated annealing algorithm (SAA), is developed. The new algorithm provides not only the strong global search capability of the GA but also the strong local hill-climbing capability of the SAA. The structure with the highest score is preferentially selected, while structures with lower scores can still be chosen; this helps the algorithm avoid premature convergence, in which high-scoring individuals steer population growth in the wrong direction. The algorithm is applied to the analysis of flight departure delays at a large hub airport. A BN model is created based on the flight data, and experiments show that the learned parameters can reflect departure delays.
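
    The hybrid selection idea (greedy retention of the highest-scoring structure while occasionally accepting lower-scoring ones) can be sketched with a simulated-annealing acceptance rule. The bit-string encoding, placeholder score, and cooling schedule below are assumptions; a real implementation would score BN structures with, e.g., BIC.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_accept(score_new, score_old, temperature):
    """Always accept higher-scoring structures; accept worse ones with
    Boltzmann probability, so lower-scoring structures can still be
    chosen and diversity is preserved."""
    if score_new >= score_old:
        return True
    return rng.random() < np.exp((score_new - score_old) / temperature)

# toy usage: a bit string stands in for a BN edge set, the score for BIC
score = lambda s: float(np.sum(s))   # placeholder structure score
current = rng.integers(0, 2, 20)
T = 1.0
for _ in range(500):
    child = current.copy()
    child[rng.integers(20)] ^= 1     # flip one candidate edge
    if anneal_accept(score(child), score(current), T):
        current = child
    T *= 0.995                       # geometric cooling schedule
print(current, score(current))
```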

  8. Towards an optimal treatment algorithm for metastatic pancreatic ductal adenocarcinoma (PDA)

    PubMed Central

    Uccello, M.; Moschetta, M.; Mak, G.; Alam, T.; Henriquez, C. Murias; Arkenau, H.-T.

    2018-01-01

    Chemotherapy remains the mainstay of treatment for advanced pancreatic ductal adenocarcinoma (PDA). Two randomized trials have demonstrated superiority of the combination regimens FOLFIRINOX (5-fluorouracil, leucovorin, oxaliplatin, and irinotecan) and gemcitabine plus nab-paclitaxel over gemcitabine monotherapy as first-line treatment in adequately fit subjects. Selected PDA patients progressing on first-line therapy can receive second-line treatment with moderate clinical benefit. Nevertheless, the optimal algorithm and the role of combination therapy in second-line are still unclear. Published second-line PDA clinical trials enrolled patients progressing on gemcitabine-based therapies in use before the approval of nab-paclitaxel and FOLFIRINOX. The evolving scenario in second-line may affect the choice of the first-line treatment. For example, nanoliposomal irinotecan plus 5-fluorouracil and leucovorin is a novel second-line option which will be suitable only for patients progressing on gemcitabine-based therapy. Therefore, clinical judgement and appropriate patient selection remain key elements in treatment decisions. In this review, we aim to illustrate currently available options and define a possible algorithm to guide treatment choice. Future clinical trials taking into account sequential treatment as a new paradigm in PDA will help define a standard algorithm. PMID:29507500

  9. Artificial Intelligence Methods: Choice of algorithms, their complexity, and appropriateness within the context of hydrology and water resources. (Invited)

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Pande, S.

    2009-12-01

    Pattern analysis deals with the automatic detection of patterns in data, and a variety of algorithms are available for the purpose. These algorithms are commonly called artificial intelligence (AI) or data-driven algorithms; they have lately been applied to a variety of problems in hydrology and are becoming extremely popular. When confronting such a range of algorithms, the question of which one is the "best" arises. Some algorithms may be preferred because of lower computational complexity; others take into account prior knowledge of the form and the amount of the data; others are chosen based on a version of the Occam's razor principle that a simpler classifier performs better. Popper has argued, however, that Occam's razor is without operational value because there is no clear measure or criterion for simplicity. Examples of measures that can be used for this purpose are the so-called algorithmic complexity (also known as Kolmogorov complexity or Kolmogorov algorithmic entropy), the Bayesian information criterion, and the Vapnik-Chervonenkis dimension. On the other hand, the No Free Lunch Theorem states that there is no best general algorithm, and that specific algorithms are superior only for specific problems. It should be noted also that the appropriate algorithm and the appropriate complexity are constrained by the finiteness of the available data and the uncertainties associated with them. Thus, there is a compromise between the complexity of the algorithm, the data properties, and the robustness of the predictions. We discuss the above topics; briefly review the historical development of applications, with particular emphasis on statistical learning theory (SLT), also known as machine learning (ML), of which support vector machines and relevance vector machines are the most commonly known algorithms; present some applications of such algorithms for distributed hydrologic modeling; and introduce an example of how the complexity measure can be applied for appropriate model choice within the context of applications in hydrologic modeling intended for use in studies of water resources and water resources management and their direct relation to extreme conditions or natural hazards.

  10. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    2000-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and it affects overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  11. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and it affects overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  12. The psychopharmacology algorithm project at the Harvard South Shore Program: an algorithm for acute mania.

    PubMed

    Mohammad, Othman; Osser, David N

    2014-01-01

    This new algorithm for the pharmacotherapy of acute mania was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. The authors conducted a literature search in PubMed and reviewed key studies, other algorithms and guidelines, and their references. Treatments were prioritized based on three main considerations: (1) effectiveness in treating the current episode, (2) preventing potential relapses to depression, and (3) minimizing side effects over the short and long term. The algorithm presupposes that clinicians have made an accurate diagnosis, decided how to manage contributing medical causes (including substance misuse), discontinued antidepressants, and considered the patient's childbearing potential. We propose different algorithms for mixed and nonmixed mania. Patients with mixed mania may be treated first with a second-generation antipsychotic, of which the first choice is quetiapine because of its greater efficacy for depressive symptoms and episodes in bipolar disorder. Valproate and then either lithium or carbamazepine may be added. For nonmixed mania, lithium is the first-line recommendation. A second-generation antipsychotic can be added. Again, quetiapine is favored, but if quetiapine is unacceptable, risperidone is the next choice. Olanzapine is not considered a first-line treatment due to its long-term side effects, but it could be second-line. If the patient, whether mixed or nonmixed, is still refractory to the above medications, then depending on what has already been tried, consider carbamazepine, haloperidol, olanzapine, risperidone, and valproate first tier; aripiprazole, asenapine, and ziprasidone second tier; and clozapine third tier (because of its weaker evidence base and greater side effects). Electroconvulsive therapy may be considered at any point in the algorithm if the patient has a history of positive response or is intolerant of medications.

  13. Awesome Aggregations

    ERIC Educational Resources Information Center

    Constible, Juanita; Lee, Richard E., Jr.

    2006-01-01

    Insects are a natural choice for studying behavioral ecology in the classroom--they are easy to obtain, maintain, and manipulate. Unlike competition and predation, however, the concept of group living does not translate well to small-scale experiments involving only a few individuals. How can inquiry be used to examine why animals live in groups?…

  14. How to Start a STEM Team

    ERIC Educational Resources Information Center

    Hughes, Bill

    2009-01-01

    The United States' poor performance in teaching math and science eliminates many of the best and brightest school children from the ranks of future scientists and engineers. With little chance to learn in school how science and math skills might translate into professionally useful knowledge, students are unable to make informed choices about…

  15. A Review of Factors Influencing Athletes' Food Choices.

    PubMed

    Birkenhead, Karen L; Slater, Gary

    2015-11-01

    Athletes make food choices on a daily basis that can affect both health and performance. A well planned nutrition strategy that includes the careful timing and selection of appropriate foods and fluids helps to maximize training adaptations and, thus, should be an integral part of the athlete's training programme. Factors that motivate food selection include taste, convenience, nutrition knowledge and beliefs. Food choice is also influenced by physiological, social, psychological and economic factors and varies both within and between individuals and populations. This review highlights the multidimensional nature of food choice and the depth of previous research investigating eating behaviours. Despite numerous studies with general populations, little exploration has been carried out with athletes, yet the energy demands of sport typically require individuals to make more frequent and/or appropriate food choices. While factors that are important to general populations also apply to athletes, it seems likely, given the competitive demands of sport, that performance would be an important factor influencing food choice. It is unclear if athletes place the same degree of importance on these factors or how food choice is influenced by involvement in sport. There is a clear need for further research exploring the food choice motives of athletes, preferably in conjunction with research investigating dietary intake to establish if intent translates into practice.

  16. Transformation Model Choice in Nonlinear Regression Analysis of Fluorescence-based Serial Dilution Assays

    PubMed Central

    Fong, Youyi; Yu, Xuesong

    2016-01-01

    Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five-parameter logistic (5PL) curves to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log-5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
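
    A minimal sketch of one of the transformation choices compared above: fit the 5PL on the log scale with scipy's curve_fit. The parameterization of the 5PL, the synthetic triplicate data, and the starting values are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def fivepl(x, b, c, d, e, f):
    """Five-parameter logistic curve on log-concentration x (one common form)."""
    return c + (d - c) / (1 + np.exp(b * (x - e))) ** f

# simulated FI readouts from a serial dilution, triplicate at 10 dilutions
rng = np.random.default_rng(5)
x = np.log(np.repeat(2.0 ** np.arange(10), 3))
true = fivepl(x, b=-1.2, c=50, d=30000, e=np.log(32), f=0.8)
y = true * (1 + 0.05 * rng.normal(size=x.size))   # multiplicative FI noise

# fitting log(y) is one of the transformation choices compared in the paper
popt, _ = curve_fit(lambda x, *p: np.log(fivepl(x, *p)), x, np.log(y),
                    p0=[-1, 40, 25000, np.log(30), 1], maxfev=20000)
print(popt)
```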

  17. A graph decomposition-based approach for water distribution network optimization

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.

    2013-04-01

    A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
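
    The decomposition step can be illustrated with networkx: bridges of the network graph separate it into subnetworks that can be optimized one at a time. The toy pipe layout is an assumption, and the paper's directed augmented tree and solution choice tables are not reproduced.

```python
import networkx as nx

# toy water network: two looped subnetworks joined by bridge pipes
G = nx.Graph()
G.add_edges_from([
    ("src", "a"), ("a", "b"), ("b", "c"), ("c", "a"),   # looped subnetwork 1
    ("c", "d"),                                          # bridge pipe
    ("d", "e"), ("e", "f"), ("f", "d"),                  # looped subnetwork 2
])

bridges = list(nx.bridges(G))          # pipes whose removal disconnects G
H = G.copy()
H.remove_edges_from(bridges)
subnetworks = [sorted(comp) for comp in nx.connected_components(H)]
print("bridges:", bridges)             # e.g. [('src', 'a'), ('c', 'd')]
print("subnetworks:", subnetworks)     # each can be optimized independently
```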

  18. The influence of digital filter type, amplitude normalisation method, and co-contraction algorithm on clinically relevant surface electromyography data during clinical movement assessments.

    PubMed

    Devaprakash, Daniel; Weir, Gillian J; Dunne, James J; Alderson, Jacqueline A; Donnelly, Cyril J

    2016-12-01

    There is a large and growing body of surface electromyography (sEMG) research using laboratory-specific signal processing procedures (i.e., digital filter type and amplitude normalisation protocols) and data analysis methods (i.e., co-contraction algorithms) to acquire practically meaningful information from these data. As a result, the ability to compare sEMG results between studies is, and continues to be, challenging. The aim of this study was to determine whether digital filter type, amplitude normalisation method, and co-contraction algorithm could influence the practical or clinical interpretation of processed sEMG data. Sixteen elite female athletes were recruited. During data collection, sEMG data were recorded from nine lower limb muscles while completing a series of calibration and clinical movement assessment trials (running and sidestepping). Three analyses were conducted: (1) signal processing with two different digital filter types (Butterworth or critically damped), (2) three amplitude normalisation methods, and (3) three co-contraction ratio algorithms. Results showed the choice of digital filter did not influence the clinical interpretation of sEMG; however, the choice of amplitude normalisation method and co-contraction algorithm did influence the clinical interpretation of the running and sidestepping tasks. Care is recommended when choosing the amplitude normalisation method and co-contraction algorithm if researchers/clinicians are interested in comparing sEMG data between studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
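
    Two of the processing choices compared above, a zero-lag Butterworth envelope and alternative amplitude normalisations, can be sketched as follows; the sampling rate, cutoff, filter order, and MVC value are illustrative assumptions (scipy provides no critically damped filter, so only the Butterworth branch is shown).

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)
# synthetic burst-like sEMG signal for demonstration
emg = np.random.default_rng(9).normal(size=t.size) * np.sin(2 * np.pi * 2 * t) ** 2

rectified = np.abs(emg)                        # full-wave rectification
b, a = butter(4, 6.0 / (fs / 2), btype="low")  # 4th-order, 6 Hz low-pass
envelope = filtfilt(b, a, rectified)           # zero-phase (dual-pass) filter

mvc = 1.8                                      # assumed max voluntary contraction amplitude
norm_mvc = envelope / mvc                      # %MVC normalisation
norm_peak = envelope / envelope.max()          # within-trial peak normalisation
print(norm_mvc.max(), norm_peak.max())
```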

  19. Hetero-association for pattern translation

    NASA Astrophysics Data System (ADS)

    Yu, Francis T. S.; Lu, Thomas T.; Yang, Xiangyang

    1991-09-01

A hetero-association neural network using an interpattern association algorithm is presented. By using simple logical rules, a hetero-association memory can be constructed based on the association between the input-output reference patterns. For optical implementation, a compact liquid crystal television neural network is used. Translations between English letters and Chinese characters, as well as between Arabic and Chinese numerals, are demonstrated. The authors show that the hetero-association model performs more effectively than the Hopfield model in retrieving large numbers of similar patterns.

  20. Hybrid intelligent methodology to design translation invariant morphological operators for Brazilian stock market prediction.

    PubMed

    Araújo, Ricardo de A

    2010-12-01

    This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Wavefront sensing with a thin diffuser

    NASA Astrophysics Data System (ADS)

    Berto, Pascal; Rigneault, Hervé; Guillon, Marc

    2017-12-01

We propose and implement a broadband, compact, and low-cost wavefront sensing scheme by simply placing a thin diffuser in the close vicinity of a camera. The local wavefront gradient is determined from the local translation of the speckle pattern. The translation vector map is computed thanks to a fast diffeomorphic image registration algorithm and integrated to reconstruct the wavefront profile. The simple translation of speckle grains under local wavefront tip/tilt is ensured by the so-called "memory effect" of the diffuser. Quantitative wavefront measurements are experimentally demonstrated both for the first few Zernike polynomials and for phase-imaging applications requiring high resolution. We finally provide a theoretical description of the resolution limit that is supported experimentally.
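
    A hedged sketch of the sensing principle: estimate the local speckle translation per sub-window by image registration and treat the shift map as a wavefront gradient to integrate. The window size and the simple cumulative-sum integration are assumptions; the paper itself uses a fast diffeomorphic registration algorithm.

      import numpy as np
      from skimage.registration import phase_cross_correlation

      def local_shifts(reference, distorted, win=32):
          # Registration per sub-window; each shift is proportional to the
          # local wavefront gradient (via the diffuser memory effect).
          rows, cols = reference.shape[0] // win, reference.shape[1] // win
          shifts = np.zeros((rows, cols, 2))
          for i in range(rows):
              for j in range(cols):
                  block = (slice(i * win, (i + 1) * win),
                           slice(j * win, (j + 1) * win))
                  shift, _, _ = phase_cross_correlation(
                      reference[block], distorted[block], upsample_factor=10)
                  shifts[i, j] = shift
          return shifts

      def integrate_gradient(gx, gy, pitch=1.0):
          # Naive path integration of the gradient field; a least-squares
          # integrator would be preferable in practice.
          return np.cumsum(gx, axis=0) * pitch + np.cumsum(gy, axis=1) * pitch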

  2. Optical double-image cryptography based on diffractive imaging with a laterally-translated phase grating.

    PubMed

    Chen, Wen; Chen, Xudong; Sheppard, Colin J R

    2011-10-10

    In this paper, we propose a method using structured-illumination-based diffractive imaging with a laterally-translated phase grating for optical double-image cryptography. An optical cryptosystem is designed, and multiple random phase-only masks are placed in the optical path. When a phase grating is laterally translated just before the plaintexts, several diffraction intensity patterns (i.e., ciphertexts) can be correspondingly obtained. During image decryption, an iterative retrieval algorithm is developed to extract plaintexts from the ciphertexts. In addition, security and advantages of the proposed method are analyzed. Feasibility and effectiveness of the proposed method are demonstrated by numerical simulation results. © 2011 Optical Society of America

  3. Industrial production of clotting factors: Challenges of expression, and choice of host cells.

    PubMed

    Kumar, Sampath R

    2015-07-01

    The development of recombinant forms of blood coagulation factors as safer alternatives to plasma derived factors marked a major advance in the treatment of common coagulation disorders. These are complex proteins, mostly enzymes or co-enzymes, involving multiple post-translational modifications, and therefore are difficult to express. This article reviews the nature of the expression challenges for the industrial production of these factors, vis-à-vis the translational and post-translational bottlenecks, as well as the choice of host cell lines for high-fidelity production. For achieving high productivities of vitamin K dependent proteins, which include factors II (prothrombin), VII, IX and X, and protein C, host cell limitation of γ-glutamyl carboxylation is a major bottleneck. Despite progress in addressing this, involvement of yet unidentified protein(s) impedes a complete cell engineering solution. Human factor VIII expresses at very low levels due to limitations at several steps in the protein secretion pathway. Protein and cell engineering, vector improvement and alternate host cells promise improvement in the productivity. Production of Von Willebrand factor is constrained by its large size, complex structure, and the need for extensive glycosylation and disulfide-bonded oligomerization. All the licensed therapeutic factors are produced in CHO, BHK or HEK293 cells. While HEK293 is a recent adoption, BHK cells appear to be disfavored. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an ℓ2 data fidelity term and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and the regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, a micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted with an adaptive regularization parameter demonstrated the ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that the algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm was computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of the algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, the authors demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.

  5. Choice of reconstructed tissue properties affects interpretation of lung EIT images.

    PubMed

    Grychtol, Bartłomiej; Adler, Andy

    2014-06-01

    Electrical impedance tomography (EIT) estimates an image of change in electrical properties within a body from stimulations and measurements at surface electrodes. There is significant interest in EIT as a tool to monitor and guide ventilation therapy in mechanically ventilated patients. In lung EIT, the EIT inverse problem is commonly linearized and only changes in electrical properties are reconstructed. Early algorithms reconstructed changes in resistivity, while most recent work using the finite element method reconstructs conductivity. Recently, we demonstrated that EIT images of ventilation can be misleading if the electrical contrasts within the thorax are not taken into account during the image reconstruction process. In this paper, we explore the effect of the choice of the reconstructed electrical properties (resistivity or conductivity) on the resulting EIT images. We show in simulation and experimental data that EIT images reconstructed with the same algorithm but with different parametrizations lead to large and clinically significant differences in the resulting images, which persist even after attempts to eliminate the impact of the parameter choice by recovering volume changes from the EIT images. Since there is no consensus among the most popular reconstruction algorithms and devices regarding the parametrization, this finding has implications for potential clinical use of EIT. We propose a program of research to develop reconstruction techniques that account for both the relationship between air volume and electrical properties of the lung and artefacts introduced by the linearization.

  6. Active learning: learning a motor skill without a coach.

    PubMed

    Huang, Vincent S; Shadmehr, Reza; Diedrichsen, Jörn

    2008-08-01

    When we learn a new skill (e.g., golf) without a coach, we are "active learners": we have to choose the specific components of the task on which to train (e.g., iron, driver, putter, etc.). What guides our selection of the training sequence? How do choices that people make compare with choices made by machine learning algorithms that attempt to optimize performance? We asked subjects to learn the novel dynamics of a robotic tool while moving it in four directions. They were instructed to choose their practice directions to maximize their performance in subsequent tests. We found that their choices were strongly influenced by motor errors: subjects tended to immediately repeat an action if that action had produced a large error. This strategy was correlated with better performance on test trials. However, even when participants performed perfectly on a movement, they did not avoid repeating that movement. The probability of repeating an action did not drop below chance even when no errors were observed. This behavior led to suboptimal performance. It also violated a strong prediction of current machine learning algorithms, which solve the active learning problem by choosing a training sequence that will maximally reduce the learner's uncertainty about the task. While we show that these algorithms do not provide an adequate description of human behavior, our results suggest ways to improve human motor learning by helping people choose an optimal training sequence.

  7. Two Improved Algorithms for Envelope and Wavefront Reduction

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1997-01-01

Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices) and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
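
    SciPy ships the simpler baseline mentioned above; a quick illustration of reordering a sparse symmetric pattern with Reverse Cuthill-McKee and measuring the bandwidth reduction (the improved Sloan and hybrid spectral algorithms of the paper are not available in SciPy):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      def bandwidth(A):
          # Largest distance of a nonzero from the diagonal.
          coo = A.tocoo()
          return int(np.max(np.abs(coo.row - coo.col)))

      A = sp.random(200, 200, density=0.02, random_state=0)
      A = ((A + A.T) > 0).astype(float).tocsr()   # symmetric sparsity pattern

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      A_rcm = A[perm, :][:, perm]
      print("bandwidth before:", bandwidth(A), "after:", bandwidth(A_rcm))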

  8. Stochastic reaction-diffusion algorithms for macromolecular crowding

    NASA Astrophysics Data System (ADS)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
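
    A toy version of the crowding effect described above: random walkers on a 2-D square lattice with a fraction of sites blocked, tracking mean squared displacement (MSD). The lattice type, obstacle fraction, and sizes are assumptions; the paper uses full reaction-diffusion algorithms (the SSA and Spatiocyte).

      import numpy as np

      rng = np.random.default_rng(1)
      L, n_walkers, n_steps, phi = 64, 200, 500, 0.3  # phi: obstacle fraction
      blocked = rng.random((L, L)) < phi
      blocked[L // 2, L // 2] = False                 # keep the start site free
      moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

      pos = np.full((n_walkers, 2), L // 2)
      start = pos.copy()
      for _ in range(n_steps):
          trial = (pos + moves[rng.integers(0, 4, n_walkers)]) % L
          free = ~blocked[trial[:, 0], trial[:, 1]]
          pos[free] = trial[free]                     # reject moves into obstacles

      msd = np.mean(np.sum((pos - start) ** 2, axis=1))
      print(f"MSD after {n_steps} steps at phi={phi}: {msd:.1f}")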

  9. Vega roll and attitude control system algorithms trade-off study

    NASA Astrophysics Data System (ADS)

    Paulino, N.; Cuciniello, G.; Cruciani, I.; Corraro, F.; Spallotta, D.; Nebula, F.

    2013-12-01

This paper describes the trade-off study for the selection of the most suitable algorithms for the Roll and Attitude Control System (RACS) within the FPS-A program, aimed at developing the new Flight Program Software of the VEGA launcher. Two algorithms were analyzed: Switching Lines (SL) and Quaternion Feedback Regulation. Using a development simulation tool that models two critical flight phases, the Long Coasting Phase (LCP) and the Payload Release (PLR) phase, both algorithms were assessed with Monte Carlo batch simulations for both phases. The statistical outcomes demonstrate a 100 percent success rate for Quaternion Feedback Regulation and support the choice of this method.

  10. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE PAGES

    O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...

    1995-01-01

Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  11. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  12. Political science. Exposure to ideologically diverse news and opinion on Facebook.

    PubMed

    Bakshy, Eytan; Messing, Solomon; Adamic, Lada A

    2015-06-05

    Exposure to news, opinion, and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using deidentified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks and examined the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook's algorithmically ranked News Feed and further studied users' choices to click through to ideologically discordant content. Compared with algorithmic ranking, individuals' choices played a stronger role in limiting exposure to cross-cutting content. Copyright © 2015, American Association for the Advancement of Science.

  13. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, optimization function of classical simulated annealing is reviewed on the random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement that the algorithm maintains its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to the optimization algorithm research.
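
    To make the "temperature length" knob concrete, here is a bare-bones classical simulated annealing run on a random travelling salesman instance; steps_per_temp is the parameter whose empirical choice the study analyses, and all values here are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      cities = rng.random((40, 2))

      def tour_length(order):
          pts = cities[order]
          return np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))

      order = rng.permutation(len(cities))
      T, alpha, steps_per_temp = 1.0, 0.95, 200       # temperature length = 200
      while T > 1e-3:
          for _ in range(steps_per_temp):
              i, j = sorted(rng.integers(0, len(cities), 2))
              cand = order.copy()
              cand[i:j + 1] = cand[i:j + 1][::-1]     # 2-opt style segment reversal
              delta = tour_length(cand) - tour_length(order)
              if delta < 0 or rng.random() < np.exp(-delta / T):
                  order = cand
          T *= alpha
      print("final tour length:", tour_length(order))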

  14. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration, and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing rate bottleneck would be significantly relieved. A development path is outlined that identifies the set of image processing functions limiting present systems against future throughput needs, translates these functions into algorithms, implements them via VLSI technology, and interfaces the hardware to a general purpose digital computer.

  15. Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods

    NASA Technical Reports Server (NTRS)

    Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon

    2010-01-01

A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (TFM) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution to a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.

  16. Classical simulation of infinite-size quantum lattice systems in two spatial dimensions.

    PubMed

    Jordan, J; Orús, R; Vidal, G; Verstraete, F; Cirac, J I

    2008-12-19

We present an algorithm to simulate two-dimensional quantum lattice systems in the thermodynamic limit. Our approach builds on the projected entangled-pair state algorithm for finite lattice systems [F. Verstraete and J. I. Cirac, arXiv:cond-mat/0407066] and the infinite time-evolving block decimation algorithm for infinite one-dimensional lattice systems [G. Vidal, Phys. Rev. Lett. 98, 070201 (2007), doi:10.1103/PhysRevLett.98.070201]. The present algorithm allows for the computation of the ground state and the simulation of time evolution in infinite two-dimensional systems that are invariant under translations. We demonstrate its performance by obtaining the ground state of the quantum Ising model and analyzing its second order quantum phase transition.

  17. A Novel Handwritten Letter Recognizer Using Enhanced Evolutionary Neural Network

    NASA Astrophysics Data System (ADS)

    Mahmoudi, Fariborz; Mirzashaeri, Mohsen; Shahamatnia, Ehsan; Faridnia, Saed

This paper introduces a novel design for handwritten letter recognition that employs a hybrid back-propagation neural network with an enhanced evolutionary algorithm. The neural network is fed with a new input representation that is invariant to translation, rotation, and scaling of the input letters. The evolutionary algorithm performs the global search of the search space, while the back-propagation algorithm performs the local search. The results have been computed by implementing this approach to recognize the 26 English capital letters in the handwriting of different people. The computational results show that the neural network achieves very satisfying results with relatively scarce input data, and the hybrid evolutionary back-propagation algorithm exhibits a promising improvement in convergence.

  18. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  19. An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay

    NASA Astrophysics Data System (ADS)

    Sayyidmousavi, Alireza; Ilie, Silvana

    2017-12-01

    Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.

  20. System theory in industrial patient monitoring: an overview.

    PubMed

    Baura, G D

    2004-01-01

Patient monitoring refers to the continuous observation of repeating events of physiologic function to guide therapy or to monitor the effectiveness of interventions; it is used primarily in the intensive care unit and operating room. Commonly processed signals are the electrocardiogram, intraarterial blood pressure, arterial saturation of oxygen, and cardiac output. To this day, the majority of physiologic waveform processing in patient monitors is conducted using heuristic curve fitting. However, in the early 1990s, a few enterprising engineers and physicians began using system theory to improve their core processing. Applications included improvement of the signal-to-noise ratio, whether degraded by low signal levels or motion artifact, and improvement in feature detection. The goal of this mini-symposium is to review the early work in this emerging field, which has led to technologic breakthroughs. In this overview talk, the process of system theory algorithm research and development is discussed. Research for industrial monitors involves substantial data collection, with some data used for algorithm training and the remainder used for validation. Once the algorithms are validated, they are translated into detailed specifications. Development then translates these specifications into DSP code. The DSP code is verified and validated per the Good Manufacturing Practices mandated by the FDA.

  1. Choice of Academic Major at a Public Research University: The Role of Gender and Self-Efficacy

    ERIC Educational Resources Information Center

    Johnson, Iryna Y.; Muse, William B.

    2017-01-01

    Females are underrepresented in certain disciplines, which translates into their having less promising career outlooks and lower earnings. This study examines the effects of socio-economic status, academic performance, high school curriculum and involvement in extra-curricular activities, as well as self-efficacy for academic achievement on…

  2. Novels and Short Stories about Work: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Koziol, Kenneth G.

    This document contains an annotated list of novels and short stories written in English or available in translation that teachers can use to help students at the secondary and college levels think critically about the world of work. The categories by which they are organized are as follows: agriculture, business, career (choices, paths, and…

  3. Fiction from the Other Americas: Bibliographic Surveys and Classroom Applications.

    ERIC Educational Resources Information Center

    Mahony, Elizabeth

    This paper contains bibliographies of Latin American fiction and classroom applications for use in a 3-week unit in an introduction to fiction class. Section 1 discusses background research, selection of materials, choice of authors, translation issues, and plans for future study and course development. Section 2 contains an annotated bibliography…

  4. Development of a Nonlinear Probability of Collision Tool for the Earth Observing System

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2006-01-01

The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. This sphere is defined as the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm is discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields answers similar to those of existing high relative velocity, linear relative trajectory algorithms. The comparison with the existing high velocity/linear theory is also used to determine at what relative velocity the analysis should switch from the existing linear theory to the new nonlinear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure 1 shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper describes the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observing System constellation is presented.
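
    For contrast with the nonlinear case, the standard high-relative-velocity computation reduces to integrating a two-dimensional Gaussian over the Hard Body Radius circle in the encounter plane; a sketch with made-up covariance and miss-distance inputs follows (the paper's algorithm instead stacks a series of such cylinders along the curved relative trajectory).

      import numpy as np
      from scipy.integrate import dblquad

      def collision_probability(miss, cov, hbr):
          # Integrate the Gaussian relative-position density over the
          # hard-body circle, in polar coordinates.
          inv = np.linalg.inv(cov)
          norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
          def density(r, theta):                      # r is the inner variable
              d = np.array([r * np.cos(theta), r * np.sin(theta)]) - miss
              return norm * np.exp(-0.5 * d @ inv @ d) * r  # polar Jacobian
          p, _ = dblquad(density, 0.0, 2.0 * np.pi,
                         lambda _t: 0.0, lambda _t: hbr)
          return p

      cov = np.diag([200.0 ** 2, 100.0 ** 2])  # m^2, encounter-plane covariance
      print(collision_probability(np.array([150.0, 50.0]), cov, hbr=20.0))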

  5. Acting Irrationally to Improve Performance in Stochastic Worlds

    NASA Astrophysics Data System (ADS)

    Belavkin, Roman V.

Despite many theories and algorithms for decision-making, after estimating the utility function the choice is usually made by maximising its expected value (the max EU principle). This traditional and 'rational' conclusion of the decision-making process is compared in this paper with several 'irrational' techniques that make the choice in Monte-Carlo fashion. The comparison is made by evaluating the performance of simple decision-theoretic agents in stochastic environments. It is shown not only that random choice strategies can achieve performance comparable to the max EU method, but that under certain conditions the Monte-Carlo choice methods perform almost twice as well as max EU. The paper concludes by quoting evidence from recent cognitive modelling work as well as the famous decision-making paradoxes.
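
    A small simulation in the spirit of that comparison: agents repeatedly choose among options using noisy utility estimates, either by maximising estimated expected utility or by Monte-Carlo (probability-matching) sampling. The payoffs, noise levels, and matching rule are assumptions, not the paper's exact setup.

      import numpy as np

      rng = np.random.default_rng(3)
      true_means = np.array([1.0, 0.8, 0.5])

      def run(policy, trials=2000):
          estimates, counts, total = np.zeros(3), np.zeros(3), 0.0
          for _ in range(trials):
              noisy = estimates + rng.normal(0.0, 0.3, 3)  # noisy EU estimates
              if policy == "max_eu":
                  a = int(np.argmax(noisy))                # always exploit
              else:                                        # Monte-Carlo choice
                  w = noisy - noisy.min() + 1e-9
                  a = int(rng.choice(3, p=w / w.sum()))
              reward = rng.normal(true_means[a], 0.5)
              counts[a] += 1
              estimates[a] += (reward - estimates[a]) / counts[a]  # running mean
              total += reward
          return total / trials

      print("max EU     :", run("max_eu"))
      print("Monte-Carlo:", run("mc"))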

  6. Specificity and Sensitivity of Claims-Based Algorithms for Identifying Members of Medicare+Choice Health Plans That Have Chronic Medical Conditions

    PubMed Central

Rector, Thomas S; Wickstrom, Steven L; Shah, Mona; Greenlee, N Thomas; Rheault, Paula; Rogowski, Jeannette; Freedman, Vicki; Adams, John; Escarce, José J

    2004-01-01

Objective: To examine the effects of varying diagnostic and pharmaceutical criteria on the performance of claims-based algorithms for identifying beneficiaries with hypertension, heart failure, chronic lung disease, arthritis, glaucoma, and diabetes. Study Setting: Secondary 1999–2000 data from two Medicare+Choice health plans. Study Design: Retrospective analysis of algorithm specificity and sensitivity. Data Collection: Physician, facility, and pharmacy claims data were extracted from electronic records for a sample of 3,633 continuously enrolled beneficiaries who responded to an independent survey that included questions about chronic diseases. Principal Findings: Compared to an algorithm that required a single medical claim in a one-year period that listed the diagnosis, either requiring that the diagnosis be listed on two separate claims or requiring that the diagnosis be listed on one claim for a face-to-face encounter with a health care provider significantly increased specificity for the conditions studied by 0.03 to 0.11. Specificity of algorithms was significantly improved by 0.03 to 0.17 when both a medical claim with a diagnosis and a pharmacy claim for a medication commonly used to treat the condition were required. Sensitivity improved significantly by 0.01 to 0.20 when the algorithm relied on either a medical claim with a diagnosis or a pharmacy claim, and by 0.05 to 0.17 when two years rather than one year of claims data were analyzed. Algorithms with specificity above 0.95 were found for all six conditions. Sensitivity above 0.90 was not achieved for all conditions. Conclusions: Varying claims criteria improved the performance of case-finding algorithms for six chronic conditions. Highly specific, and sometimes sensitive, algorithms for identifying members of health plans with several chronic conditions can be developed using claims data. PMID:15533190

  7. Programming and Tuning a Quantum Annealing Device to Solve Real World Problems

    NASA Astrophysics Data System (ADS)

    Perdomo-Ortiz, Alejandro; O'Gorman, Bryan; Fluegemann, Joseph; Smelyanskiy, Vadim

    2015-03-01

    Solving real-world applications with quantum algorithms requires overcoming several challenges, ranging from translating the computational problem at hand to the quantum-machine language to tuning parameters of the quantum algorithm that have a significant impact on the performance of the device. In this talk, we discuss these challenges, strategies developed to enhance performance, and also a more efficient implementation of several applications. Although we will focus on applications of interest to NASA's Quantum Artificial Intelligence Laboratory, the methods and concepts presented here apply to a broader family of hard discrete optimization problems, including those that occur in many machine-learning algorithms.

  8. Automated mapping of pharmacy orders from two electronic health record systems to RxNorm within the STRIDE clinical data warehouse.

    PubMed

    Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry

    2009-11-14

    The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.

  9. A method for digital image registration using a mathematical programming technique

    NASA Technical Reports Server (NTRS)

    Yao, S. S.

    1973-01-01

    A new algorithm based on a nonlinear programming technique to correct the geometrical distortions of one digital image with respect to another is discussed. This algorithm promises to be superior to existing ones in that it is capable of treating localized differential scaling, translational and rotational errors over the whole image plane. A series of piece-wise 'rubber-sheet' approximations are used, constrained in such a manner that a smooth approximation over the entire image can be obtained. The theoretical derivation is included. The result of using the algorithm to register four channel S065 Apollo IX digitized photography over Imperial Valley, California, is discussed in detail.

  10. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recent proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR performed in a GPU can be five times faster than its sequential implementation. PMID:25493625
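
    A bare-bones (CPU, single-objective) firefly algorithm on a toy continuous objective, showing the attraction-and-move update that the paper parallelises on a GPU; the multiobjective formulation and the multivariate-calibration (MLR) objective are omitted, and all parameter values are assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      n, dim = 20, 5
      beta0, gamma, alpha = 1.0, 1.0, 0.1
      X = rng.uniform(-3.0, 3.0, (n, dim))
      objective = lambda x: np.sum(x ** 2)        # stand-in; lower is brighter

      for _ in range(100):
          brightness = np.array([objective(x) for x in X])
          for i in range(n):
              for j in range(n):
                  if brightness[j] < brightness[i]:     # move i toward brighter j
                      r2 = np.sum((X[i] - X[j]) ** 2)
                      beta = beta0 * np.exp(-gamma * r2)
                      X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(0, 1, dim)
      print("best value:", min(objective(x) for x in X))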

  12. Formal verification of an oral messages algorithm for interactive consistency

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1992-01-01

    The formal specification and verification of an algorithm for Interactive Consistency based on the Oral Messages algorithm for Byzantine Agreement is described. We compare our treatment with that of Bevier and Young, who presented a formal specification and verification for a very similar algorithm. Unlike Bevier and Young, who observed that 'the invariant maintained in the recursive subcases of the algorithm is significantly more complicated than is suggested by the published proof' and who found its formal verification 'a fairly difficult exercise in mechanical theorem proving,' our treatment is very close to the previously published analysis of the algorithm, and our formal specification and verification are straightforward. This example illustrates how delicate choices in the formulation of the problem can have significant impact on the readability of its formal specification and on the tractability of its formal verification.

  13. Proposals for Updating Tai Algorithm

    DTIC Science & Technology

    1997-12-01

1997 meeting, the Comité International des Poids et Mesures (CIPM) decided to change the name of the Comité Consultatif pour la Définition de la ...Report of the BIPM Time Section, 1988, 1, D1-D22. [2] P. Tavella, C. Thomas, Comparative study of time scale algorithms, Metrologia, 1991, 28, 57...alternative choice for implementing an upper limit of clock weights, Metrologia, 1996, 33, 227-240. [5] C. Thomas, Impact of New Clock Technologies

  14. Complete exchange on the iPSC-860

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1991-01-01

The implementation of complete exchange on the circuit-switched Intel iPSC-860 hypercube is described. This pattern, also known as all-to-all personalized communication, is the densest requirement that can be imposed on a network. On the iPSC-860, care needs to be taken to avoid edge contention, which can have a disastrous impact on communication time. There are basically two classes of algorithms that achieve contention-free complete exchange. The first contains the classical standard exchange algorithm that is generally useful for small message sizes. The second includes a number of optimal or near-optimal algorithms that are best for large messages. Measurements of communication overhead on the iPSC-860 are given, and a notation for analyzing communication link usage is developed. It is shown that for the two classes of algorithms there is substantial variation in performance with synchronization technique and choice of message protocol. Timings of six implementations are given; each of these is useful over a particular range of message size and cube dimension. Since the complete exchange is a superset of communication patterns, these timings represent upper bounds on the time required by an arbitrary communication requirement. These results indicate that the programmer needs to evaluate several possibilities before finalizing an implementation - a careful choice can lead to very significant savings in time.
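
    One of the contention-free schedules alluded to above can be sketched compactly: in step k of the pairwise-exchange scheme (one of the optimal large-message algorithms), node r exchanges with node r XOR k, so the pairs in each step are disjoint. This shows the schedule only; the measured performance depends on the synchronization and protocol choices discussed in the paper.

      def pairwise_exchange_schedule(d):
          # Complete exchange on a d-dimensional hypercube of p = 2**d nodes:
          # p - 1 steps, each a perfect matching of the nodes.
          p = 1 << d
          schedule = []
          for k in range(1, p):
              schedule.append([(r, r ^ k) for r in range(p) if r < r ^ k])
          return schedule

      for step, pairs in enumerate(pairwise_exchange_schedule(3), start=1):
          print(f"step {step}: {pairs}")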

  15. A mathematical model for computer image tracking.

    PubMed

    Legters, G R; Young, T Y

    1982-06-01

    A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.

  16. Restoration algorithms for imaging through atmospheric turbulence

    DTIC Science & Technology

    2017-02-18

the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed...with $w_i(\xi) = G_\sigma(|\mathcal{F}(v_i)(\xi)|^p) \,/\, \sum_{j=1}^{M} G_\sigma(|\mathcal{F}(v_j)(\xi)|^p)$, where $\mathcal{F}$ denotes the Fourier transform ($\xi$ are the frequencies) and $G_\sigma$ is a Gaussian filter of...a combination of SIFT [26] and ORSA [14] algorithms) in order to remove affine transformations (translations, rotations and homothety). The authors

  17. Cone-beam reconstruction for the two-circles-plus-one-line trajectory

    NASA Astrophysics Data System (ADS)

    Lu, Yanbin; Yang, Jiansheng; Emerson, John W.; Mao, Heng; Zhou, Tie; Si, Yuanzheng; Jiang, Ming

    2012-05-01

    The Kodak Image Station In-Vivo FX has an x-ray module with cone-beam configuration for radiographic imaging but lacks the functionality of tomography. To introduce x-ray tomography into the system, we choose the two-circles-plus-one-line trajectory by mounting one translation motor and one rotation motor. We establish a reconstruction algorithm by applying the M-line reconstruction method. Numerical studies and preliminary physical phantom experiment demonstrate the feasibility of the proposed design and reconstruction algorithm.

  18. A computational procedure for large rotational motions in multibody dynamics

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.

    1987-01-01

A computational procedure suitable for the solution of equations of motion for multibody systems is presented. The present procedure adopts a differential partitioning of the translational motions and the rotational motions. The translational equations of motion are then treated by either a conventional explicit or an implicit direct integration method. A principal feature of this procedure is a nonlinearly implicit algorithm for updating rotations via the Euler four-parameter representation. This procedure is applied to the rolling of a sphere through a specific trajectory, which shows that it yields robust solutions.

  19. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    PubMed

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.

  20. Electrophoretic Deformation of Individual Transfer RNA Molecules Reveals Their Identity.

    PubMed

    Henley, Robert Y; Ashcroft, Brian Alan; Farrell, Ian; Cooperman, Barry S; Lindsay, Stuart M; Wanunu, Meni

    2016-01-13

    It has been hypothesized that the ribosome gains additional fidelity during protein translation by probing structural differences in tRNA species. We measure the translocation kinetics of different tRNA species through ∼3 nm diameter synthetic nanopores. Each tRNA species varies in the time scale with which it is deformed from equilibrium, as in the translocation step of protein translation. Using machine-learning algorithms, we can differentiate among five tRNA species, analyze the ratios of tRNA binary mixtures, and distinguish tRNA isoacceptors.

  1. Insights from Preclinical Choice Models on Treating Drug Addiction.

    PubMed

    Banks, Matthew L; Negus, S Stevens

    2017-02-01

    Substance-use disorders are a global public health problem that arises from behavioral misallocation between drug use and more adaptive behaviors maintained by nondrug alternatives (e.g., food or money). Preclinical drug self-administration procedures that incorporate a concurrently available nondrug reinforcer (e.g., food) provide translationally relevant and distinct dependent measures of behavioral allocation (i.e., to assess the relative reinforcing efficacy of the drug) and behavioral rate (i.e., to assess motor competence). In particular, preclinical drug versus food 'choice' procedures have produced increasingly concordant results with both human laboratory drug self-administration studies and double-blind placebo-controlled clinical trials. Accordingly, here we provide a heuristic framework of substance-use disorders based on a behavioral-centric perspective and recent insights from these preclinical choice procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. An Accelerated Recursive Doubling Algorithm for Block Tridiagonal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Sudip K

    2014-01-01

Block tridiagonal systems of linear equations arise in a wide variety of scientific and engineering applications. The recursive doubling algorithm is a well-known prefix computation-based numerical algorithm that requires O(M^3(N/P + log P)) work to compute the solution of a block tridiagonal system with N block rows and block size M on P processors. In real-world applications, solutions of tridiagonal systems are most often sought with multiple, often hundreds or thousands of, different right hand sides but with the same tridiagonal matrix. Here, we show that a recursive doubling algorithm is sub-optimal when computing solutions of block tridiagonal systems with multiple right hand sides and present a novel algorithm, called the accelerated recursive doubling algorithm, that delivers O(R) improvement when solving block tridiagonal systems with R distinct right hand sides. Since R is typically about 100-1000, this improvement translates to very significant speedups in practice. Detailed complexity analyses of the new algorithm with empirical confirmation of runtime improvements are presented. To the best of our knowledge, this algorithm has not been reported before in the literature.
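
    A sequential emulation of the recursive doubling (parallel prefix) pattern underlying the solver, shown with scalar prefix products for clarity; the paper's version carries M x M block operations and R right hand sides, which is where the reported O(R) saving arises.

      import numpy as np

      def recursive_doubling_prefix(values):
          # Hillis-Steele style scan: after ceil(log2(n)) rounds, slot i
          # holds the product of values[0..i].
          x = np.array(values, dtype=float)
          stride = 1
          while stride < len(x):
              partner = np.concatenate([np.ones(stride), x[:-stride]])
              x = x * partner          # combine with the value stride slots back
              stride *= 2
          return x

      print(recursive_doubling_prefix([1, 2, 3, 4, 5, 6, 7, 8]))
      # -> [1. 2. 6. 24. 120. 720. 5040. 40320.]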

  3. Pose estimation for augmented reality applications using genetic algorithm.

    PubMed

    Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen

    2005-12-01

This paper describes a genetic algorithm that tackles the pose-estimation problem in computer vision. Our genetic algorithm can find the rotation and translation of an object accurately when the three-dimensional structure of the object is given. In our implementation, each chromosome encodes both the pose and the indexes to the selected point features of the object. Instead of only searching for the pose as in existing work, our algorithm at the same time searches for a set containing the most reliable feature points in the process. This mismatch filtering strategy successfully makes the algorithm more robust under the presence of point mismatches and outliers in the images. Our algorithm has been tested with both synthetic and real data with good results. The accuracy of the recovered pose is compared to that of existing algorithms. Our approach outperformed Lowe's method and the other two genetic algorithms under the presence of point mismatches and outliers. In addition, it has been used to estimate the pose of a real object. It is shown that the proposed method is applicable to augmented reality applications.

  4. Removing Ambiguities In Remotely Sensed Winds

    NASA Technical Reports Server (NTRS)

    Shaffer, Scott J.; Dunbar, Roy S.; Hsiao, Shuchi V.; Long, David G.

    1991-01-01

    Algorithm removes ambiguities in choices of candidate ocean-surface wind vectors estimated from measurements of radar backscatter from ocean waves. Increases accuracies of estimates of winds without requiring new instrumentation. Incorporates vector-median filtering function.

  5. Comparison of rule induction, decision trees and formal concept analysis approaches for classification

    NASA Astrophysics Data System (ADS)

    Kotelnikov, E. V.; Milov, V. R.

    2018-05-01

Rule-based learning algorithms have higher transparency and are easier to interpret than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively for descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. The article compares the quality of binary and multiclass classification solutions based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5; however, the latter two generate more compact rule sets.

  6. Translational Models of Gambling-Related Decision-Making.

    PubMed

    Winstanley, Catharine A; Clark, Luke

Gambling is a harmless, recreational pastime that is ubiquitous across cultures. However, for some, gambling becomes maladaptive and compulsive, and this syndrome is conceptualized as a behavioural addiction. Laboratory models that capture the key cognitive processes involved in gambling behaviour, and that can be translated across species, have the potential to make an important contribution to both decision neuroscience and the study of addictive disorders. The Iowa gambling task has been widely used to assess human decision-making under uncertainty, and this paradigm can be successfully modelled in rodents. Similar neurobiological processes underpin choice behaviour in humans and rats, and thus a preference for the disadvantageous "high-risk, high-reward" options may reflect meaningful vulnerability for mental health problems. However, the choice behaviour operationalized by these tasks does not necessarily approximate the vulnerability to gambling disorder (GD) per se. We consider a number of psychological challenges that apply to modelling gambling in a translational way, and evaluate the success of the existing models. Heterogeneity in the structure of gambling games, as well as in the motivations of individuals with GD, is highlighted. The potential issues with extrapolating too directly from established animal models of drug dependency are discussed, as are the inherent difficulties in validating animal models of GD in the absence of any approved treatments for GD. Further advances in modelling the cognitive biases endemic in human decision-making, which appear to be exacerbated in GD, may be a promising line of research.

  7. Efficient Reassignment of a Frequent Serine Codon in Wild-Type Escherichia coli.

    PubMed

    Ho, Joanne M; Reynolds, Noah M; Rivera, Keith; Connolly, Morgan; Guo, Li-Tao; Ling, Jiqiang; Pappin, Darryl J; Church, George M; Söll, Dieter

    2016-02-19

    Expansion of the genetic code through engineering the translation machinery has greatly increased the chemical repertoire of the proteome. This has been accomplished mainly by read-through of UAG or UGA stop codons by the noncanonical aminoacyl-tRNA of choice. While stop codon read-through involves competition with the translation release factors, sense codon reassignment entails competition with a large pool of endogenous tRNAs. We used an engineered pyrrolysyl-tRNA synthetase to incorporate 3-iodo-l-phenylalanine (3-I-Phe) at a number of different serine and leucine codons in wild-type Escherichia coli. Quantitative LC-MS/MS measurements of amino acid incorporation yields carried out in a selected reaction monitoring experiment revealed that the 3-I-Phe abundance at the Ser208AGU codon in superfolder GFP was 65 ± 17%. This method also allowed quantification of other amino acids (serine, 33 ± 17%; phenylalanine, 1 ± 1%; threonine, 1 ± 1%) that compete with 3-I-Phe at both the aminoacylation and decoding steps of translation for incorporation at the same codon position. Reassignments of different serine (AGU, AGC, UCG) and leucine (CUG) codons with the matching tRNA(Pyl) anticodon variants were met with varying success, and our findings provide a guideline for the choice of sense codons to be reassigned. Our results indicate that the 3-iodo-l-phenylalanyl-tRNA synthetase (IFRS)/tRNA(Pyl) pair can efficiently outcompete the cellular machinery to reassign select sense codons in wild-type E. coli.

  8. Normalization is a general neural mechanism for context-dependent decision making

    PubMed Central

    Louie, Kenway; Khaw, Mel W.; Glimcher, Paul W.

    2013-01-01

    Understanding the neural code is critical to linking brain and behavior. In sensory systems, divisive normalization seems to be a canonical neural computation, observed in areas ranging from retina to cortex and mediating processes including contrast adaptation, surround suppression, visual attention, and multisensory integration. Recent electrophysiological studies have extended these insights beyond the sensory domain, demonstrating an analogous algorithm for the value signals that guide decision making, but the effects of normalization on choice behavior are unknown. Here, we show that choice models using normalization generate significant (and classically irrational) choice phenomena driven by either the value or number of alternative options. In value-guided choice experiments, both monkey and human choosers show novel context-dependent behavior consistent with normalization. These findings suggest that the neural mechanism of value coding critically influences stochastic choice behavior and provide a generalizable quantitative framework for examining context effects in decision making. PMID:23530203
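
    A toy illustration of the computation in question: under divisive normalization, each option's represented value is divided by a common pool containing all options' values, so adding a third alternative compresses the coded difference between the first two. The functional form is the standard normalization equation; the parameters are illustrative.

    ```python
    import numpy as np

    def normalized_values(values, sigma=1.0, r_max=1.0):
        """Divisive normalization: v_i -> r_max * v_i / (sigma + sum_j v_j)."""
        v = np.asarray(values, dtype=float)
        return r_max * v / (sigma + v.sum())

    print(normalized_values([10, 8]))      # two options: relatively well separated
    print(normalized_values([10, 8, 6]))   # a third option shrinks the coded gap
    ```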

  9. Analyte quantification with comprehensive two-dimensional gas chromatography: assessment of methods for baseline correction, peak delineation, and matrix effect elimination for real samples.

    PubMed

    Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J

    2015-01-02

    Comprehensive two-dimensional gas chromatography (GC×GC) is used widely to separate and measure organic chemicals in complex mixtures. However, approaches to quantify analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with electron capture detector (μECD), further confirmed qualitatively by GC×GC with electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction algorithms and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influence the reproducibility of analyte signal, error of the calibration offset, proportionality of integrated signal response, and accuracy of quantifications. Additionally, the choice of baseline correction and the peak delineation algorithm are essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC.

  10. Evolutionary conservation of codon optimality reveals hidden signatures of cotranslational folding.

    PubMed

    Pechmann, Sebastian; Frydman, Judith

    2013-02-01

    The choice of codons can influence local translation kinetics during protein synthesis. Whether codon preference is linked to cotranslational regulation of polypeptide folding remains unclear. Here, we derive a revised translational efficiency scale that incorporates the competition between tRNA supply and demand. Applying this scale to ten closely related yeast species, we uncover the evolutionary conservation of codon optimality in eukaryotes. This analysis reveals universal patterns of conserved optimal and nonoptimal codons, often in clusters, which associate with the secondary structure of the translated polypeptides independent of the levels of expression. Our analysis suggests an evolved function for codon optimality in regulating the rhythm of elongation to facilitate cotranslational polypeptide folding, beyond its previously proposed role of adapting to the cost of expression. These findings establish how mRNA sequences are generally under selection to optimize the cotranslational folding of corresponding polypeptides.

  11. Geography and the Properties of Surfaces. The Sandwich Theorem - A Basic One for Geography.

    DTIC Science & Technology

    Discusses the nature of the Sandwich Theorem and its relationship to Geography and provides an algorithm and a complete program to achieve ’solutions.’ Also included is a translation of one work of Hugo Steinhaus. (Author)

  12. Exclusive queueing model including the choice of service windows

    NASA Astrophysics Data System (ADS)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We contrived a logit-based choice algorithm in which agents consider the number of agents at, and the distance to, every service window. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail, and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems that include the choice of service windows and can be employed to optimize facility design and floor management.
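
    A minimal sketch of a logit-based window choice of the kind described, with our own utility form: a window's utility trades off its queue length against the distance to it, and the agent samples a window from the softmax of utilities. The parameter names (alpha, beta) are illustrative, not the paper's.

    ```python
    import numpy as np

    def choose_window(queue_lengths, distances, alpha=1.0, beta=0.5,
                      rng=np.random.default_rng()):
        u = -alpha * np.asarray(queue_lengths) - beta * np.asarray(distances)
        p = np.exp(u - u.max())
        p /= p.sum()                       # softmax -> choice probabilities
        return rng.choice(len(p), p=p)

    print(choose_window([3, 5, 1], [2.0, 1.0, 6.0]))  # usually picks window 2
    ```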

  13. Moving the Field Forward: A Micro-Meso-Macro Model for Critical Language Planning. The Case of Estonia

    ERIC Educational Resources Information Center

    Skerrett, Delaney Michael

    2016-01-01

    This study investigates "de facto" language policy in Estonia. It investigates how language choices at the micro (or individual) level are negotiated within the macro (or social and historical) context: how official language policy and other features of the discursive environment surrounding language and its use in Estonia translate into…

  14. Curricular Choices of Ultra-Orthodox Jewish Communities: Translating International Human Rights Law into Education Policy

    ERIC Educational Resources Information Center

    Perry-Hazan, Lotem

    2015-01-01

    This paper employs the provisions of international human rights law in order to analyse whether and how liberal states should regulate Haredi educational practices, which sanctify the exclusive focus on religious studies in schools for boys. It conceptualises the conflict between the right to acceptable education and the right to adaptable…

  15. Listening to Parents Translates into More Referrals

    ERIC Educational Resources Information Center

    Kirchner, Jo

    2012-01-01

    Today's child care and early education landscape has vastly changed from a few years ago; parents have more choices and they are better informed. More than ever before, it is important to hear what parents say they are searching for and to act on the specific information provided. When it comes to finding the right provider for their children,…

  16. A mass graph-based approach for the identification of modified proteoforms using top-down tandem mass spectra.

    PubMed

    Kou, Qiang; Wu, Si; Tolic, Nikola; Paša-Tolic, Ljiljana; Liu, Yunlong; Liu, Xiaowen

    2017-05-01

    Although proteomics has rapidly developed in the past decade, researchers are still in the early stage of exploring the world of complex proteoforms, which are protein products with various primary structure alterations resulting from gene mutations, alternative splicing, post-translational modifications, and other biological processes. Proteoform identification is essential to mapping proteoforms to their biological functions as well as discovering novel proteoforms and new protein functions. Top-down mass spectrometry is the method of choice for identifying complex proteoforms because it provides a 'bird's eye view' of intact proteoforms. The combinatorial explosion of various alterations on a protein may result in billions of possible proteoforms, making proteoform identification a challenging computational problem. We propose a new data structure, called the mass graph, for efficient representation of proteoforms, and design mass graph alignment algorithms. We developed TopMG, a mass graph-based software tool for proteoform identification by top-down mass spectrometry. Experiments on top-down mass spectrometry datasets showed that TopMG outperformed existing methods in identifying complex proteoforms. TopMG is available at http://proteomics.informatics.iupui.edu/software/topmg/ (contact: xwliu@iupui.edu); supplementary data are available at Bioinformatics online.

  17. Improved quantitative analysis of spectra using a new method of obtaining derivative spectra based on a singular perturbation technique.

    PubMed

    Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan

    2015-06-01

    Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. To improve the quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data must be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, together with a stability analysis of the estimator. Theoretical analysis and simulation results confirm that derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated in an analysis of beer spectra. Derivative spectra of beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator achieves better performance than the Savitzky-Golay algorithm and can serve as an alternative for quantitative analytical applications.
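
    The study benchmarks the new estimator against the Savitzky-Golay algorithm; the sketch below shows that baseline, a first-derivative spectrum computed with SciPy's savgol_filter on a synthetic noisy band. The window length and polynomial order are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    wavenumber = np.linspace(1000, 2500, 1500)
    spectrum = (np.exp(-((wavenumber - 1650) / 40) ** 2)            # one absorption band
                + 0.02 * np.random.default_rng(0).normal(size=wavenumber.size))

    # first derivative with smoothing, scaled by the sampling interval
    d1 = savgol_filter(spectrum, window_length=21, polyorder=3,
                       deriv=1, delta=wavenumber[1] - wavenumber[0])
    ```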

  18. Zero-Time Renal Transplant Biopsies: A Comprehensive Review.

    PubMed

    Naesens, Maarten

    2016-07-01

    Zero-time kidney biopsies, obtained at time of transplantation, are performed in many transplant centers worldwide. Decisions on kidney discard, kidney allocation, and choice of peritransplant and posttransplant treatment are sometimes based on the histological information obtained from these biopsies. This comprehensive review evaluates the practical considerations of performing zero-time biopsies, the predictive performance of zero-time histology and composite histological scores, and the clinical utility of these biopsies. The predictive performance of individual histological lesions and of composite scores for posttransplant outcome is at best moderate. No single histological lesion or composite score is sufficiently robust to be included in algorithms for kidney discard. Dual kidney transplantation has been based on histological assessment of zero-time biopsies and improves outcome in individual patients, but the waitlist effects of this strategy remain obscure. Zero-time biopsies are valuable for clinical and translational research purposes, providing insight in risk factors for posttransplant events, and as baseline for comparison with posttransplant histology. The molecular phenotype of zero-time biopsies yields novel therapeutic targets for improvement of donor selection, peritransplant management and kidney preservation. It remains however highly unclear whether the molecular expression variation in zero-time biopsies could become a better predictor for posttransplant outcome than donor/recipient baseline demographic factors.

  19. Comparison of low‐dose, half‐rotation, cone‐beam CT with electronic portal imaging device for registration of fiducial markers during prostate radiotherapy

    PubMed Central

    Wee, Leonard; Hackett, Sara Lyons; Jones, Andrew; Lim, Tee Sin; Harper, Christopher Stirling

    2013-01-01

    This study evaluated the agreement of fiducial marker localization between two modalities — an electronic portal imaging device (EPID) and cone‐beam computed tomography (CBCT) — using a low‐dose, half‐rotation scanning protocol. Twenty‐five prostate cancer patients with implanted fiducial markers were enrolled. Before each daily treatment, EPID and half‐rotation CBCT images were acquired. Translational shifts were computed for each modality and two marker‐matching algorithms, seed‐chamfer and grey‐value, were performed for each set of CBCT images. The localization offsets, and systematic and random errors from both modalities were computed. Localization performances for both modalities were compared using Bland‐Altman limits of agreement (LoA) analysis, Deming regression analysis, and Cohen's kappa inter‐rater analysis. The differences in the systematic and random errors between the modalities were within 0.2 mm in all directions. The LoA analysis revealed a 95% agreement limit of the modalities of 2 to 3.5 mm in any given translational direction. Deming regression analysis demonstrated that constant biases existed in the shifts computed by the modalities in the superior–inferior (SI) direction, but no significant proportional biases were identified in any direction. Cohen's kappa analysis showed good agreement between the modalities in prescribing translational corrections of the couch at 3 and 5 mm action levels. Images obtained from EPID and half‐rotation CBCT showed acceptable agreement for registration of fiducial markers. The seed‐chamfer algorithm for tracking of fiducial markers in CBCT datasets yielded better agreement than the grey‐value matching algorithm with EPID‐based registration. PACS numbers: 87.55.km, 87.55.Qr PMID:23835391
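
    A sketch of the Bland-Altman limits-of-agreement computation used to compare the two modalities, run on made-up shift values; the study's own data are not reproduced here.

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Mean difference (bias) and 95% limits of agreement between two methods."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)

    epid = [1.2, -0.5, 2.1, 0.3, -1.0]     # SI shifts in mm (illustrative only)
    cbct = [1.0, -0.2, 2.6, 0.1, -1.4]
    print(bland_altman(epid, cbct))
    ```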

  20. Construction project selection with the use of fuzzy preference relation

    NASA Astrophysics Data System (ADS)

    Ibadov, Nabi

    2016-06-01

    In this article, the author describes the problem of selecting a construction project variant during the pre-investment phase. As a solution, an algorithm based on a fuzzy preference relation is presented, and the article provides an example of the algorithm used to select the best variant of a construction project. The choice is made based on criteria such as net present value (NPV), level of technological difficulty, financing possibilities, and level of organizational difficulty.
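
    A hedged sketch of variant selection via a fuzzy preference relation, under our own construction (the article's exact formulas are not reproduced): per-criterion scores become pairwise preference degrees, the matrices are aggregated with criterion weights, and the variant with the highest non-dominance degree is selected.

    ```python
    import numpy as np

    def preference_matrix(scores):
        """Pairwise preference degrees in [0, 1] from one criterion's scores."""
        s = np.asarray(scores, float)
        d = s[:, None] - s[None, :]
        span = d.max() - d.min()
        return 0.5 + d / (2 * span if span else 1)

    def select_variant(score_table, weights):
        """score_table: variants x criteria, higher is better; weights sum to 1."""
        P = sum(w * preference_matrix(col)
                for w, col in zip(weights, np.asarray(score_table, float).T))
        strict = np.clip(P - P.T, 0, None)       # strict preference intensity
        nondominance = 1 - strict.max(axis=0)    # how little each variant is dominated
        return int(np.argmax(nondominance))

    # three variants scored on NPV, financing possibilities, (inverse) difficulty
    table = [[0.8, 0.6, 0.4], [0.5, 0.9, 0.7], [0.6, 0.5, 0.9]]
    print(select_variant(table, weights=[0.5, 0.3, 0.2]))
    ```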

  1. Evaluation of Semantic Web Technologies for Storing Computable Definitions of Electronic Health Records Phenotyping Algorithms.

    PubMed

    Papež, Václav; Denaxas, Spiros; Hemingway, Harry

    2017-01-01

    Electronic Health Records are electronic data generated during or as a byproduct of routine patient care. Structured, semi-structured and unstructured EHR offer researchers unprecedented phenotypic breadth and depth and have the potential to accelerate the development of precision medicine approaches at scale. A main EHR use-case is defining phenotyping algorithms that identify disease status, onset and severity. Phenotyping algorithms utilize diagnoses, prescriptions, laboratory tests, symptoms and other elements in order to identify patients with or without a specific trait. No common standardized, structured, computable format exists for storing phenotyping algorithms. The majority of algorithms are stored as human-readable descriptive text documents, which makes their translation to code challenging, given their inherent complexity, and hinders their sharing and re-use across the community. In this paper, we evaluate two key Semantic Web Technologies, the Web Ontology Language and the Resource Description Framework, for enabling computable representations of EHR-driven phenotyping algorithms.
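
    As a hedged illustration of the kind of computable representation being evaluated, the sketch below encodes one toy phenotyping criterion as RDF triples with rdflib; the namespace, class and property names are our inventions, not the paper's schema.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/phenotype#")
    g = Graph()
    g.bind("ex", EX)

    # "type 2 diabetes: diagnosis code plus an elevated HbA1c" as triples
    algo = EX.Type2DiabetesAlgorithm
    g.add((algo, RDF.type, EX.PhenotypingAlgorithm))
    g.add((algo, EX.requiresDiagnosisCode, Literal("ICD10:E11")))
    g.add((algo, EX.requiresLabCriterion, EX.HbA1cCriterion))
    g.add((EX.HbA1cCriterion, EX.testName, Literal("HbA1c")))
    g.add((EX.HbA1cCriterion, EX.thresholdPercent, Literal(6.5)))

    print(g.serialize(format="turtle"))
    ```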

  2. Orientation estimation algorithm applied to high-spin projectiles

    NASA Astrophysics Data System (ADS)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.

  3. An evaluation of the NQF Quality Data Model for representing Electronic Health Record driven phenotyping algorithms.

    PubMed

    Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Peissig, Peggy L; Denny, Joshua C; Kho, Abel N; Miller, Aaron; Pathak, Jyotishman

    2012-01-01

    The development of Electronic Health Record (EHR)-based phenotype selection algorithms is a non-trivial and highly iterative process involving domain experts and informaticians. To make it easier to port algorithms across institutions, it is desirable to represent them using an unambiguous formal specification language. For this purpose we evaluated the recently developed National Quality Forum (NQF) information model designed for EHR-based quality measures: the Quality Data Model (QDM). We selected 9 phenotyping algorithms that had been previously developed as part of the eMERGE consortium and translated them into QDM format. Our study concluded that the QDM contains several core elements that make it a promising format for EHR-driven phenotyping algorithms for clinical research. However, we also found areas in which the QDM could be usefully extended, such as representing information extracted from clinical text, and the ability to handle algorithms that do not consist of Boolean combinations of criteria.

  4. The routing, modulation level, and spectrum allocation algorithm in the virtual optical network mapping

    NASA Astrophysics Data System (ADS)

    Wang, Yunyun; Li, Hui; Liu, Yuze; Ji, Yuefeng; Li, Hongfa

    2017-10-01

    With the development of large-scale video services and cloud computing, network resources are increasingly delivered as services. In an SDON, the SDN controller holds the underlying physical resource information and can thus allocate appropriate resources and bandwidth to each VON service. However, for services with extremely strict QoT (quality of transmission) requirements, the shortest-distance path algorithm is often unable to meet the requirements because it does not take link spectrum resources into account, while always choosing the least-occupied links tends to produce more spectrum fragments. We therefore propose a new RMLSA (routing, modulation level, and spectrum allocation) algorithm to reduce the blocking probability. The results show about 40% lower blocking probability than the shortest-distance algorithm and the minimum-spectrum-usage priority algorithm. The algorithm thereby satisfies demands with strict QoT requirements.

  5. Translational medicine: science or wishful thinking?

    PubMed Central

    Wehling, Martin

    2008-01-01

    "Translational medicine" as a fashionable term is being increasingly used to describe the wish of biomedical researchers to ultimately help patients. Despite increased efforts and investments into R&D, the output of novel medicines has been declining dramatically over the past years. Improvement of translation is thought to become a remedy as one of the reasons for this widening gap between input and output is the difficult transition between preclinical ("basic") and clinical stages in the R&D process. Animal experiments, test tube analyses and early human trials do simply not reflect the patient situation well enough to reliably predict efficacy and safety of a novel compound or device. This goal, however, can only be achieved if the translational processes are scientifically backed up by robust methods some of which still need to be developed. This mainly relates to biomarker development and predictivity assessment, biostatistical methods, smart and accelerated early human study designs and decision algorithms among other features. It is therefore claimed that a new science needs to be developed called 'translational science in medicine'. PMID:18559092

  6. CONNJUR spectrum translator: an open source application for reformatting NMR spectral data.

    PubMed

    Nowling, Ronald J; Vyas, Jay; Weatherby, Gerard; Fenwick, Matthew W; Ellis, Heidi J C; Gryk, Michael R

    2011-05-01

    NMR spectroscopists are hindered by the lack of standardization of spectral data among the file formats of various NMR data processing tools. This lack of standardization is cumbersome, as researchers must perform their own file conversions in order to switch between processing tools, and it restricts the combination of tools employed if no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation with two key algorithmic improvements. The first is the translation of NMR spectral data (time and frequency domain) to a single in-memory data model, which allows a new file format to be added with just two converter modules, a reader and a writer, instead of a separate converter to and from each existing format. Second, the use of layout descriptors allows a single fid data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimal user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.
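
    A minimal sketch of the hub-and-spoke idea: with a single in-memory model, supporting a new format takes one reader and one writer (2N modules for N formats) rather than a translator for every format pair. The class, registry and format names are illustrative, not CONNJUR's actual API.

    ```python
    class Spectrum:
        """Single in-memory model: spectral data plus metadata."""
        def __init__(self, data, metadata):
            self.data, self.metadata = data, metadata

    READERS, WRITERS = {}, {}

    def reader(fmt):
        def register(fn):
            READERS[fmt] = fn
            return fn
        return register

    def writer(fmt):
        def register(fn):
            WRITERS[fmt] = fn
            return fn
        return register

    @reader("nmrpipe")
    def read_nmrpipe(path):            # stub: parse header, apply layout descriptor
        return Spectrum(data=[], metadata={"source": path})

    @writer("sparky")
    def write_sparky(spectrum, path):  # stub: lay out data for the target format
        pass

    def convert(src_path, src_fmt, dst_path, dst_fmt):
        WRITERS[dst_fmt](READERS[src_fmt](src_path), dst_path)
    ```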

  7. Translational Modeling to Guide Study Design and Dose Choice in Obesity Exemplified by AZD1979, a Melanin‐concentrating Hormone Receptor 1 Antagonist

    PubMed Central

    Trägårdh, M; Lindén, D; Ploj, K; Johansson, A; Turnbull, A; Carlsson, B; Antonsson, M

    2017-01-01

    In this study, we present the translational modeling used in the discovery of AZD1979, a melanin‐concentrating hormone receptor 1 (MCHr1) antagonist aimed for treatment of obesity. The model quantitatively connects the relevant biomarkers and thereby closes the scaling path from rodent to man, as well as from dose to effect level. The complexity of individual modeling steps depends on the quality and quantity of data as well as the prior information; from semimechanistic body‐composition models to standard linear regression. Key predictions are obtained by standard forward simulation (e.g., predicting effect from exposure), as well as non‐parametric input estimation (e.g., predicting energy intake from longitudinal body‐weight data), across species. The work illustrates how modeling integrates data from several species, fills critical gaps between biomarkers, and supports experimental design and human dose‐prediction. We believe this approach can be of general interest for translation in the obesity field, and might inspire translational reasoning more broadly. PMID:28556607

  8. The low noise limit in gene expression

    DOE PAGES

    Dar, Roy D.; Weinberger, Leor S.; Cox, Chris D.; ...

    2015-10-21

    Protein noise measurements are increasingly used to elucidate biophysical parameters. Unfortunately, noise analyses are often at odds with directly measured parameters. Here we show that these inconsistencies arise from two problematic analytical choices: (i) the assumption that protein translation rate is invariant for different proteins of different abundances, which has inadvertently led to (ii) the assumption that a large constitutive extrinsic noise sets the low noise limit in gene expression. While growing evidence suggests that transcriptional bursting may set the low noise limit, variability in translational bursting has been largely ignored. We show that genome-wide systematic variation in translational efficiency can, and in the case of E. coli does, control the low noise limit in gene expression. Therefore, constitutive extrinsic noise is small and only plays a role in the absence of a systematic variation in translational efficiency. Lastly, these results show the existence of two distinct expression noise patterns: (1) a global noise floor uniformly imposed on all genes by expression bursting; and (2) high noise distributed to only a select group of genes.

  9. Iris recognition using image moments and k-means algorithm.

    PubMed

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%.
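
    A minimal sketch of the matching step described, assuming precomputed invariant-moment feature vectors (random stand-ins below) and scikit-learn's KMeans: cluster the gallery of feature vectors, then assign a query image to the cluster with the nearest centroid.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 7))   # stand-ins for per-image moment vectors

    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

    query = rng.normal(size=(1, 7))        # moments of a newly acquired iris image
    print("assigned to cluster", km.predict(query)[0])   # nearest centroid (Euclidean)
    ```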

  11. Doubling down on peptide phosphorylation as a variable mass modification

    USDA-ARS?s Scientific Manuscript database

    Some mass spectrometrists believe that searching for variable post-translational modifications like phosphorylation of serine or threonine when using database-search algorithms to interpret peptide tandem mass spectra will increase false positive rates. The basis for this is the premise that the al...

  12. A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Eliasmith, Chris; Stewart, Terry; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena

    2011-01-01

    Motor prostheses aim to restore function to disabled patients. Despite compelling proof-of-concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter-based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped onto the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real time, and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer the power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
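
    A toy sketch of the decoder that was mapped onto the SNN: a Kalman-style predict/correct update of a 2-D velocity estimate from binned firing rates. The dynamics, tuning and gain matrices below are random stand-ins rather than fitted values, and the NEF mapping itself is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, dim = 50, 2
    A = 0.95 * np.eye(dim)                    # velocity dynamics model
    C = rng.normal(size=(n_neurons, dim))     # observation (tuning) model
    K = 0.1 * np.linalg.pinv(C)               # crude stand-in for the Kalman gain

    v = np.zeros(dim)
    for _ in range(100):                      # one update per spike-count bin
        y = C @ rng.normal(size=dim) + rng.normal(size=n_neurons)  # fake rates
        v_pred = A @ v                        # predict from dynamics
        v = v_pred + K @ (y - C @ v_pred)     # correct with the innovation
    ```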

  13. Synthetic aperture tomographic phase microscopy for 3D imaging of live cells in translational motion

    PubMed Central

    Lue, Niyom; Choi, Wonshik; Popescu, Gabriel; Badizadegan, Kamran; Dasari, Ramachandra R.; Feld, Michael S.

    2009-01-01

    We present a technique for 3D imaging of live cells in translational motion without the need for axial scanning of the objective lens. A set of transmitted electric-field images of cells at successive points of transverse translation is taken with focused-beam illumination. Based on Huygens' principle, angular plane waves are synthesized from the E-field images of a focused beam. For a set of synthesized angular plane waves, we apply a filtered back-projection algorithm and obtain 3D maps of the refractive index of live cells. This technique, which we refer to as synthetic aperture tomographic phase microscopy, can potentially be combined with flow cytometry or microfluidic devices, and will enable high-throughput acquisition of quantitative refractive index data from large numbers of cells. PMID:18825263

  14. A method to track rotational motion for use in single-molecule biophysics.

    PubMed

    Lipfert, Jan; Kerssemakers, Jacob J W; Rojer, Maylon; Dekker, Nynke H

    2011-10-01

    The double helical nature of DNA links many cellular processes such as DNA replication, transcription, and repair to rotational motion and the accumulation of torsional strain. Magnetic tweezers (MTs) are a single-molecule technique that enables the application of precisely calibrated stretching forces to nucleic acid tethers and to control their rotational motion. However, conventional magnetic tweezers do not directly monitor rotation or measure torque. Here, we describe a method to directly measure rotational motion of particles in MT. The method relies on attaching small, non-magnetic beads to the magnetic beads to act as fiducial markers for rotational tracking. CCD images of the beads are analyzed with a tracking algorithm specifically designed to minimize crosstalk between translational and rotational motion: first, the in-plane center position of the magnetic bead is determined with a kernel-based tracker, while subsequently the height and rotation angle of the bead are determined via correlation-based algorithms. Evaluation of the tracking algorithm using both simulated images and recorded images of surface-immobilized beads demonstrates a rotational resolution of 0.1°, while maintaining a translational resolution of 1-2 nm. Example traces of the rotational fluctuations exhibited by DNA-tethered beads confined in magnetic potentials of varying stiffness demonstrate the robustness of the method and the potential for simultaneous tracking of multiple beads. Our rotation tracking algorithm enables the extension of MTs to magnetic torque tweezers (MTT) to directly measure the torque in single molecules. In addition, we envision uses of the algorithm in a range of biophysical measurements, including further extensions of MT, tethered particle motion, and optical trapping measurements.

  15. Multiscale registration algorithm for alignment of meshes

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2004-03-01

    Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. The algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair, a surrogate scan is created by the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find the surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, those two points are accepted as a pair of surrogate corresponding points. The rotation and translation between the surrogate scan pair are determined by using a set of surrogate corresponding points, and with the same rotation and translation values the original scan pair is aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm terminates; otherwise the process is continued with cubes of smaller and smaller sizes. At each finer resolution, the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points that were found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.

  16. Spectral mapping tools from the earth sciences applied to spectral microscopy data.

    PubMed

    Harris, A Thomas

    2006-08-01

    Spectral imaging, originating from the field of earth remote sensing, is a powerful tool that is being increasingly used in a wide variety of applications for material identification. Several workers have used techniques like linear spectral unmixing (LSU) to discriminate materials in images derived from spectral microscopy. However, many spectral analysis algorithms rely on assumptions that are often violated in microscopy applications. This study explores algorithms originally developed as improvements on early earth imaging techniques that can be easily translated for use with spectral microscopy. To best demonstrate the application of earth remote sensing spectral analysis tools to spectral microscopy data, earth imaging software was used to analyze data acquired with a Leica confocal microscope with mechanical spectral scanning. For this study, spectral training signatures (often referred to as endmembers) were selected with the ENVI (ITT Visual Information Solutions, Boulder, CO) "spectral hourglass" processing flow, a series of tools that use the spectrally over-determined nature of hyperspectral data to find the most spectrally pure (or spectrally unique) pixels within the data set. This set of endmember signatures was then used in the full range of mapping algorithms available in ENVI to determine locations, and in some cases subpixel abundances of endmembers. Mapping and abundance images showed a broad agreement between the spectral analysis algorithms, supported through visual assessment of output classification images and through statistical analysis of the distribution of pixels within each endmember class. The powerful spectral analysis algorithms available in COTS software, the result of decades of research in earth imaging, are easily translated to new sources of spectral data. Although the scale between earth imagery and spectral microscopy is radically different, the problem is the same: mapping material locations and abundances based on unique spectral signatures.

  17. Middle school students' reading comprehension of mathematical texts and algebraic equations

    NASA Astrophysics Data System (ADS)

    Duru, Adem; Koklu, Onder

    2011-06-01

    In this study, middle school students' abilities to translate mathematical texts into algebraic representations and vice versa were investigated, along with students' difficulties in making such translations and the potential sources of these difficulties. Both qualitative and quantitative methods were used to collect data: a questionnaire and clinical interviews. The questionnaire consisted of two general types of items: (1) selected-response (multiple-choice) items, for which the respondent selects from multiple options, and (2) open-ended items, for which the respondent constructs a response. To further investigate the students' strategies while translating the given mathematical texts to algebraic equations and vice versa, five randomly chosen (n = 5) students were interviewed. Data were collected in the 2007-2008 school year from 185 middle-school students in five teachers' classrooms in three different schools in the city of Adıyaman, Turkey. Analysis of the data showed that the students who participated in this study had difficulties in translating mathematical texts into algebraic equations using symbols, and that they also had difficulties in translating symbolic representations into mathematical texts because of weak reading comprehension. In addition, the findings of this research revealed that students' difficulties in translating between mathematical texts and symbolic representations come from different sources.

  18. An algorithm for calculating exam quality as a basis for performance-based allocation of funds at medical schools.

    PubMed

    Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte

    2016-01-01

    The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability as well as the distribution of grades achieved. This algorithm quantitatively describes exam quality of multiple choice exams. However, it can also be applied to exams involving short assay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and - in analogy to impact factors and third party grants - a ranking among faculty. Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, reliability of the exam and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds.
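
    A sketch of the ingredients such an algorithm combines, computed on a 0/1 response matrix (students x items): item difficulty, corrected point-biserial discrimination, and KR-20 reliability. How these are weighted with the grade distribution into a single quality score is a policy choice and is not reproduced here.

    ```python
    import numpy as np

    def exam_quality_stats(X):
        X = np.asarray(X, float)               # rows: students, columns: items
        difficulty = X.mean(axis=0)            # proportion correct per item
        total = X.sum(axis=1)
        discrimination = np.array(             # item vs. rest-of-test correlation
            [np.corrcoef(X[:, i], total - X[:, i])[0, 1] for i in range(X.shape[1])])
        k = X.shape[1]
        pq = (difficulty * (1 - difficulty)).sum()
        kr20 = k / (k - 1) * (1 - pq / total.var(ddof=1))   # reliability
        return difficulty, discrimination, kr20

    rng = np.random.default_rng(0)
    answers = (rng.random((120, 30)) < 0.7).astype(int)     # simulated exam only
    print(exam_quality_stats(answers)[2])
    ```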

  19. PubMed Central

    PANATTO, D.; ARATA, L.; BEVILACQUA, I.; APPRATO, L.; GASPARINI, R.; AMICIZIA, D.

    2015-01-01

    Introduction. Health-related knowledge is often assessed through multiple-choice tests. Among the different types of formats, researchers may opt to use multiple-mark items, i.e. with more than one correct answer. Although multiple-mark items have long been used in the academic setting – sometimes with scant or inconclusive results – little is known about the implementation of this format in research on in-field health education and promotion. Methods. A study population of secondary school students completed a survey on nutrition-related knowledge, followed by a single-lecture intervention. Answers were scored by means of eight different scoring algorithms and analyzed from the perspective of classical test theory. The same survey was re-administered to a sample of the students in order to evaluate the short-term change in their knowledge. Results. In all, 286 questionnaires were analyzed. Partial scoring algorithms displayed better psychometric characteristics than the dichotomous rule. In particular, the algorithm proposed by Ripkey and the balanced rule showed greater internal consistency and relative efficiency in scoring multiple-mark items. A penalizing algorithm in which the proportion of marked distracters was subtracted from that of marked correct answers was the only one that highlighted a significant difference in performance between natives and immigrants, probably owing to its slightly better discriminatory ability. This algorithm was also associated with the largest effect size in the pre-/post-intervention score change. Discussion. The choice of an appropriate rule for scoring multiple-mark items in research on health education and promotion should consider not only the psychometric properties of single algorithms but also the study aims and outcomes, since scoring rules differ in terms of bias, reliability, difficulty, sensitivity to guessing and discrimination. PMID:26900331

  20. Emergency ultrasound-based algorithms for diagnosing blunt abdominal trauma.

    PubMed

    Stengel, Dirk; Bauwens, Kai; Rademacher, Grit; Ekkernkamp, Axel; Güthoff, Claas

    2013-07-31

    Ultrasonography is regarded as the tool of choice for early diagnostic investigations in patients with suspected blunt abdominal trauma. Although its sensitivity is too low for definite exclusion of abdominal organ injury, proponents of ultrasound argue that ultrasound-based clinical pathways enhance the speed of primary trauma assessment, reduce the number of computed tomography scans and cut costs. The objective was to assess the effects of trauma algorithms that include ultrasound examinations in patients with suspected blunt abdominal trauma. We searched the Cochrane Injuries Group's Specialised Register, CENTRAL (The Cochrane Library), MEDLINE (OvidSP), EMBASE (OvidSP), CINAHL (EBSCO), publishers' databases, controlled trials registers and the Internet; bibliographies of identified articles and conference abstracts were searched for further eligible studies, trial authors were contacted for further information and individual patient data, and the searches were updated in February 2013. We included randomised controlled trials (RCTs) and quasi-randomised trials (qRCTs) of patients with blunt torso, abdominal or multiple trauma undergoing diagnostic investigations for abdominal organ injury, in which diagnostic algorithms comprising emergency ultrasonography (US) were compared with diagnostic algorithms without ultrasound examinations (for example, primary computed tomography [CT] or diagnostic peritoneal lavage [DPL]); the outcomes were mortality, use of CT and DPL, cost-effectiveness, laparotomy and negative laparotomy rates, delayed diagnoses, and quality of life. Two authors independently selected trials for inclusion, assessed methodological quality and extracted data. Where possible, data were pooled and relative risks (RRs), risk differences (RDs) and weighted mean differences, each with 95% confidence intervals (CIs), were calculated by fixed- or random-effects modelling, as appropriate. We identified four studies meeting our inclusion criteria; overall, trials were of moderate methodological quality, and few trial authors responded to our written inquiries seeking to resolve controversial issues and to obtain individual patient data. We pooled mortality data from three trials involving 1254 patients; the relative risk in favour of the US arm was 1.00 (95% CI 0.50 to 2.00). US-based pathways significantly reduced the number of CT scans (random-effects RD -0.52, 95% CI -0.83 to -0.21), but the meaning of this result is unclear: given the low sensitivity of ultrasound, the reduction in CT scans may translate to either a number needed to treat or a number needed to harm of two. There is currently insufficient evidence from RCTs to justify promotion of ultrasound-based clinical pathways in diagnosing patients with suspected blunt abdominal trauma.

  1. Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael

    2015-05-15

    We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures—thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures, including the mean out degree and variance of out degrees, can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
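
    A compact sketch of the transformation as described: delay-embed the series, map each embedding window to its ordinal pattern (a node), and link temporally successive patterns with weighted directed edges, giving the Markov-chain representation.

    ```python
    import numpy as np
    from collections import defaultdict

    def ordinal_partition_network(x, dim=4, tau=5):
        x = np.asarray(x)
        offsets = np.arange(0, dim * tau, tau)      # fixed time lag within windows
        patterns = [tuple(np.argsort(x[i + offsets]))
                    for i in range(len(x) - dim * tau)]
        edges = defaultdict(int)
        for a, b in zip(patterns, patterns[1:]):    # temporal succession
            edges[(a, b)] += 1
        return edges

    x = np.sin(np.linspace(0, 60, 3000))            # periodic -> ring-like network
    net = ordinal_partition_network(x)
    print(len({n for e in net for n in e}), "nodes,", len(net), "distinct edges")
    ```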

  2. Correcting for possible tissue distortion between provocation and assessment in skin testing: the divergent beam UVB photo-test.

    PubMed

    O'Doherty, Jim; Henricson, Joakim; Falk, Magnus; Anderson, Chris D

    2013-11-01

    In tissue viability imaging (TiVi), an assessment method for skin erythema, correct orientation of the skin from provocation to assessment optimizes data interpretation. Image processing algorithms could compensate for the effects of skin translation, torsion and rotation by realigning assessment images to the position of the skin at provocation. A reference image of a divergent UVB phototest was acquired, as well as test images at varying levels of translation, rotation and torsion. Using 12 skin markers, an algorithm was applied to restore the distorted test images to the reference image. The algorithm corrected torsion and rotation up to approximately 35 degrees. The radius of the erythemal reaction and the average value of the input image closely matched the 'true values' of the reference image. The image 'de-warping' procedure improves the robustness of response-image evaluation in a clinical research setting and opens the possibility of correcting flawed images acquired away from the laboratory by the subject or patient themselves. This opportunity may increase the use of photo-testing and, by extension, other late-response skin testing in which the necessity of a return assessment visit is a disincentive to performing the test.

  3. An improved finger-vein recognition algorithm based on template matching

    NASA Astrophysics Data System (ADS)

    Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping

    2016-10-01

    Finger-vein recognition has become one of the most popular biometric identification methods, and recognition algorithms remain the key research topic in this field. Many applicable algorithms have been developed so far. However, some problems remain in practice, such as variance in finger position, which may lead to image distortion and shifting; in addition, matching parameters determined from experience may reduce the adaptability of an algorithm during the identification process. Focusing on the above problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. To enhance the robustness of the algorithm to image distortion, the least-squares error method is adopted to correct for an oblique finger. During feature extraction, a local adaptive threshold method is adopted. As regards the matching scores, we optimized the translation preferences as well as the matching distance between the input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method can effectively improve robustness under finger shifting and rotation conditions.

  4. The rhetoric of informed choice: perspectives from midwives on intrapartum fetal heart rate monitoring

    PubMed Central

    Hindley, Carol; Thomson, Ann M.

    2005-01-01

    Objective: To investigate midwives’ attitudes, values and beliefs on the use of intrapartum fetal monitoring. Design: Qualitative, semi‐structured interviews. Subjects and setting: Fifty‐eight registered midwives in two hospitals in the North of England. Results: Two main themes are discussed in this paper: informed choice and the power of the midwife. Midwives favoured the application of informed choice and shared a unanimous consensus on its definition. However, the idealistic perception of informed choice, which included contemporary notions of empowerment and autonomy for women expressing an informed choice, was reportedly not translated into practice. Midwives had to implement informed choice on intrapartum fetal monitoring within a competing set of health service agendas, i.e. medically driven protocols and a political climate of actively managed childbearing. This resulted in the manipulation of information during the midwives’ interactions with women, which ultimately meant that the women often got the choice the midwives wanted them to have. Conclusions: The information that a midwife imparts may consciously or subconsciously affect the woman's uptake and understanding of information. Therefore, the midwife has a powerful role to play in balancing the benefits and risk ratios applicable to fetal heart rate monitoring. However, a deeply ingrained pre‐occupation with technological methods of intrapartum fetal monitoring over many years has made it difficult for midwives to offer alternative forms of monitoring. This has placed limits on the facilitation of informed choice and autonomous decision making for women. PMID:16266418

  5. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    NASA Astrophysics Data System (ADS)

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detection and recognition of bank card logos in a video stream. The proposed algorithm can be used effectively in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other rigid-object detection problems in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
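
    A hedged sketch of the bandit-style selection, treating each Viola-Jones cascade as an arm of an N-armed bandit: keep a running mean reward per cascade and process each frame with the best arm so far, with occasional exploration. The epsilon-greedy rule and the reward definition are our simplifications of the paper's strategy.

    ```python
    import numpy as np

    class GreedyCascadeSelector:
        def __init__(self, n_cascades, epsilon=0.1, rng=np.random.default_rng()):
            self.counts = np.zeros(n_cascades)
            self.values = np.zeros(n_cascades)    # running mean reward per arm
            self.epsilon, self.rng = epsilon, rng

        def pick(self):
            if self.rng.random() < self.epsilon:
                return int(self.rng.integers(len(self.values)))   # explore
            return int(np.argmax(self.values))                    # exploit

        def update(self, arm, reward):            # reward: e.g. 1 if a logo was found
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    selector = GreedyCascadeSelector(n_cascades=5)
    arm = selector.pick()          # run cascade `arm` on the frame, observe reward
    selector.update(arm, reward=1)
    ```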

  6. Implementation of a Space Communications Cognitive Engine

    NASA Technical Reports Server (NTRS)

    Hackett, Timothy M.; Bilen, Sven G.; Ferreira, Paulo Victor R.; Wyglinski, Alexander M.; Reinhart, Richard C.

    2017-01-01

    Although communications-based cognitive engines have been proposed, very few have been implemented in a full system, especially in a space communications system. In this paper, we detail the implementation of a multi-objective reinforcement-learning algorithm and deep artificial neural networks for use as a radio-resource-allocation controller. The modular software architecture presented encourages re-use and easy modification for trying different algorithms. Various trade studies involved with the system implementation and integration are discussed. These include the choice of software libraries that provide platform flexibility and promote reusability, choices regarding the deployment of this cognitive engine within a system architecture using the DVB-S2 standard and commercial hardware, and constraints placed on the cognitive engine by real-world radio constraints. The implemented radio-resource-allocation-management controller was then integrated with the larger space-ground system developed by NASA Glenn Research Center (GRC).

  7. Efficient sparse matrix multiplication scheme for the CYBER 203

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.

    1984-01-01

    This work has been directed toward the development of an efficient sparse matrix multiplication algorithm for the CYBER-203. The desire to provide software that gives the user the choice between the often conflicting goals of minimizing central processing (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of three types of storage is selected for each diagonal. For each storage type, an initialization subroutine estimates the CPU and storage requirements based upon results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user, which reflect the relative importance the user places on the resources. The three storage types were chosen to be efficient on the CYBER-203 for diagonals that are sparse, moderately sparse, or dense; however, for many densities, no single storage type is most efficient with respect to both resource requirements, and the user-supplied weights dictate the choice.

  8. An algorithm of adaptive scale object tracking in occlusion

    NASA Astrophysics Data System (ADS)

    Zhao, Congmei

    2017-05-01

    Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, problems remain in handling scale variation, object occlusion, fast motion and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to recover the target after it is lost. Comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK show that the proposed approach estimates the object state accurately and handles object occlusion effectively.

  9. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
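
    A small usage example of the library's core API (fmin, tpe.suggest, and an hp.choice space spanning two classifier families, echoing the joint classifier-plus-preprocessing view described above); the objective below is a placeholder standing in for a real cross-validated error:

        from hyperopt import fmin, tpe, hp

        # Joint search space over model family and its hyperparameters.
        space = hp.choice("model", [
            {"type": "svm", "C": hp.lognormal("svm_C", 0, 1)},
            {"type": "knn", "n_neighbors": hp.quniform("knn_k", 1, 30, 1)},
        ])

        def objective(params):
            # Stand-in loss; replace with the cross-validated error of the
            # model configured by `params`.
            return 0.5 if params["type"] == "svm" else 0.6

        best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
        print(best)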

  10. Delay-based virtual congestion control in multi-tenant datacenters

    NASA Astrophysics Data System (ADS)

    Liu, Yuxin; Zhu, Danhong; Zhang, Dong

    2018-03-01

    With the evolution of cloud computing and virtualization, congestion control for virtual datacenters has become a basic issue for transmission in multi-tenant datacenters. To address the fairness conflict among the heterogeneous congestion control schemes of multiple tenants, this paper proposes a delay-based virtual congestion control that uniformly translates the tenants' heterogeneous congestion control into delay-based feedback by introducing a hypervisor translation layer, modifying the three-way handshake for explicit feedback and packet-loss feedback, and throttling the receive window. The simulation results show that the delay-based virtual congestion control can effectively resolve the unfairness among heterogeneous feedback congestion control algorithms.
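
    The abstract does not give the translation rules themselves; as a rough illustration of delay-based window throttling at a hypervisor layer, here is a Vegas-style sketch in which the interface and all constants are assumptions, not the paper's design:

        def clamp_receive_window(rtt_ms, base_rtt_ms, rwnd_bytes, mss=1460,
                                 alpha=2, beta=4):
            """Vegas-style delay feedback used to throttle the advertised
            receive window (illustrative constants, not the paper's)."""
            # Backlog estimate in packets: window * (1 - baseRTT / RTT).
            backlog = (rwnd_bytes / mss) * (1.0 - base_rtt_ms / rtt_ms)
            if backlog < alpha:   # little queueing delay: let the window grow
                return rwnd_bytes + mss
            if backlog > beta:    # queueing delay building up: throttle
                return max(mss, rwnd_bytes - mss)
            return rwnd_bytes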

  11. A decoupled recursive approach for constrained flexible multibody system dynamics

    NASA Technical Reports Server (NTRS)

    Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung

    1989-01-01

    A variational vector-calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate the deformation kinematics under the small-deformation assumption, and an efficient recursive algorithm for flexible dynamics is developed. Dynamic analysis of a closed-loop robot is performed to illustrate the efficiency of the algorithm.

  12. Predicting Sepsis Risk Using the "Sniffer" Algorithm in the Electronic Medical Record.

    PubMed

    Olenick, Evelyn M; Zimbro, Kathie S; DʼLima, Gabrielle M; Ver Schneider, Patricia; Jones, Danielle

    The Sepsis "Sniffer" Algorithm (SSA) has merit as a digital sepsis alert but, given its lower specificity and positive predictive value, should be considered an adjunct to, rather than an alternative for, the Nurse Screening Tool (NST). The SSA reduced the risk of incorrectly categorizing patients at low risk for sepsis, detected high sepsis risk in half the time, and reduced redundant NST screens by 70% and manual screening hours by 64% to 72%. Preserving nurse hours expended on manual sepsis alerts may translate into time directed toward other patient priorities.

  13. A critical appraisal of experimental intracerebral hemorrhage research

    PubMed Central

    MacLellan, Crystal L; Paquette, Rosalie; Colbourne, Frederick

    2012-01-01

    The likelihood of translating therapeutic interventions for stroke rests on the quality of preclinical science. Given the limited success of putative treatments for ischemic stroke and the reasons put forth to explain it, we sought to determine whether such problems hamper progress for intracerebral hemorrhage (ICH). Approximately 10% to 20% of strokes result from an ICH, which results in considerable disability and high mortality. Several animal models reproduce ICH and its underlying pathophysiology, and these models have been widely used to evaluate treatments. As yet, however, none has successfully translated. In this review, we focus on rodent models of ICH, highlighting differences among them (e.g., pathophysiology), issues with experimental design and analysis, and choice of end points. A PubMed search for experimental ICH (years: 2007 to 31 July 2011) found 121 papers. Of these, 84% tested neuroprotectants, 11% tested stem cell therapies, and 5% tested rehabilitation therapies. We reviewed these to examine study quality (e.g., use of blinding procedures) and choice of end points (e.g., behavioral testing). Not surprisingly, the problems that have plagued the ischemia field are also prevalent in ICH literature. Based on these data, several recommendations are put forth to facilitate progress in identifying effective treatments for ICH. PMID:22293989

  14. Sensory and motoric influences on attention dynamics during standing balance recovery in young and older adults.

    PubMed

    Redfern, Mark S; Chambers, April J; Jennings, J Richard; Furman, Joseph M

    2017-08-01

    This study investigated the impact of attention on the sensory and motor actions during postural recovery from underfoot perturbations in young and older adults. A dual-task paradigm was used involving disjunctive and choice reaction time (RT) tasks to auditory and visual stimuli at different delays from the onset of two types of platform perturbations (rotations and translations). The RTs were increased prior to the perturbation (preparation phase) and during the immediate recovery response (response initiation) in young and older adults, but this interference dissipated rapidly after the perturbation response was initiated (<220 ms). The sensory modality of the RT task impacted the results with interference being greater for the auditory task compared to the visual task. As motor complexity of the RT task increased (disjunctive versus choice) there was greater interference from the perturbation. Finally, increasing the complexity of the postural perturbation by mixing the rotational and translational perturbations together increased interference for the auditory RT tasks, but did not affect the visual RT responses. These results suggest that sensory and motoric components of postural control are under the influence of different dynamic attentional processes.

  15. caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research.

    PubMed

    Crowley, Rebecca S; Castine, Melissa; Mitchell, Kevin; Chavan, Girish; McSherry, Tara; Feldman, Michael

    2010-01-01

    The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.

  16. Enhancing Quality Interventions Promoting Healthy Sexuality (EQUIPS): A Novel Application of Translational Research Methods

    PubMed Central

    Acosta, Joie; Ebener, Patricia; Driver, Jennifer; Keith, Jamie; Peebles, Dana

    2013-01-01

    Abstract Translational research is expanding, in part, because Evidence‐Based Programs or Practices (EBPs) are not adopted in many medical domains. However, little translational research exists on EBPs that are prevention programs delivered in nonclinical, community‐based settings. These organizations often have low capacity, which undermines implementation quality and outcomes. Rigorous translational research is needed in these settings so that, within a single study, capacity, implementation quality, and outcomes are measured and the links between them tested. This paper overviews the study Enhancing Quality Interventions Promoting Healthy Sexuality (EQUIPS), which tests how well a community‐based setting (Boys & Girls Clubs) conducts an EBP called Making Proud Choices, which aims to prevent teen pregnancy and sexually transmitted infections, with and without an implementation support intervention called Getting To Outcomes. The study design is novel in that it assesses Getting To Outcomes’ impact on capacity, implementation quality, and outcomes simultaneously and in both study conditions; it will assess sustainability by measuring capacity and fidelity a year after the Getting To Outcomes support ends; and it will operate on a large scale similar to many national initiatives. Many studies have not incorporated all these elements, and thus EQUIPS could serve as a model for translational research in many domains. PMID:23751031

  17. Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei

    2018-04-01

    Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and accurately detecting them is a critical problem. Owing to the powerful feature extraction and representation capability of deep learning, integrated frameworks that combine deep region proposal generation with object detection have greatly improved the performance of multi-class geospatial object detection for HSR remote sensing imagery. However, the translation invariance introduced by the convolution operations in a convolutional neural network (CNN) seldom harms the classification stage but easily degrades the localization accuracy of the predicted bounding boxes in the detection stage. This dilemma between translation invariance in the classification stage and translation variance in the detection stage had not been addressed for HSR remote sensing imagery, and it causes position-accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of such integrated frameworks, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of a fully convolutional network (FCN), built on a residual network, to resolve the dilemma between translation invariance in the classification stage and translation variance in the detection stage. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated on a publicly available 10-class object detection dataset.

  18. A fast two-plus-one phase-shifting algorithm for high-speed three-dimensional shape measurement system

    NASA Astrophysics Data System (ADS)

    Wang, Wenyun; Guo, Yingfu

    2008-12-01

    Phase-shifting methods for 3-D shape measurement have long been employed in optical metrology for their speed and accuracy. For real-time, accurate 3-D shape measurement, a four-step phase-shifting algorithm, which has the advantage of symmetry, is a good choice; however, its measurement error is sensitive to any fringe-image errors caused by various sources, such as motion blur. To alleviate this problem, a fast two-plus-one phase-shifting algorithm is proposed in this paper. This kind of technology will benefit many applications such as medical imaging, gaming, animation, computer vision and computer graphics.
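
    The abstract does not give the equations; in one common form of the 2+1 algorithm, two fringe images 90 degrees apart plus one uniform (flat) image yield the wrapped phase directly. The sketch below uses that intensity model as an assumption, since the paper's exact variant may differ:

        import numpy as np

        def two_plus_one_phase(i1, i2, i3):
            """Wrapped phase from a common 2+1 intensity model:
            i1 = A + B*sin(phi), i2 = A + B*cos(phi), i3 = A (flat image),
            so phi = atan2(i1 - i3, i2 - i3), wrapped to (-pi, pi]."""
            return np.arctan2(i1 - i3, i2 - i3)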

  19. A Specification for a Godunov-type Eulerian 2-D Hydrocode, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William D; Robey, Jonathan M

    2012-05-01

    The purpose of this code specification is to describe an algorithm for solving the Euler equations of hydrodynamics in a 2D rectangular region in sufficient detail to allow a software developer to produce an implementation on their target platform using their programming language of choice, without requiring detailed knowledge and experience in the field of computational fluid dynamics. It should be possible for a software developer who is proficient in the programming language of choice and is knowledgeable about the target hardware to produce an efficient implementation of this specification if they also possess a thorough working knowledge of parallel programming and have some experience in scientific programming using fields and meshes. On modern architectures, it will be important to focus on issues related to the exploitation of the fine-grain parallelism and data locality present in this algorithm. This specification aims to make that task easier by presenting the essential details of the algorithm in a systematic and language-neutral manner while also avoiding the inclusion of implementation details that would likely be specific to a particular type of programming paradigm or platform architecture.

  20. Estimation of non-solid lung nodule volume with low-dose CT protocols: effect of reconstruction algorithm and measurement method

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer

    2017-03-01

    Computed tomography is the primary modality of choice for assessing the stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacities) over three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithms and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with nonsolid (-800 HU and -630 HU) and solid (-10 HU) radiodensities, sizes of 5 mm and 10 mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol typical of screening (1.7 mGy) and sub-screening (0.6 mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radiodensity was the factor contributing most to overall error, based on ANOVA. The choice of reconstruction algorithm or measurement method did not substantially affect the accuracy of the measurements; however, the measurement method affected repeatability, with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.

  1. On the reduction of 4d $\mathcal{N}=1$ theories on $\mathbb{S}^2$

    DOE PAGES

    Gadde, Abhijit; Razamat, Shlomo S.; Willett, Brian

    2015-11-24

    Here, we discuss reductions of general $\mathcal{N}=1$ four-dimensional gauge theories on $\mathbb{S}^2$. The effective two-dimensional theory one obtains depends on the details of the coupling of the theory to background fields, which can be translated into a choice of R-symmetry. We argue that, for special choices of R-symmetry, the resulting two-dimensional theory has a natural interpretation as an $\mathcal{N}=(0,2)$ gauge theory. As an application of our general observations, we discuss reductions of $\mathcal{N}=1$ and $\mathcal{N}=2$ dualities and argue that they imply certain two-dimensional dualities.

  2. Two formalisms, one renormalized stress-energy tensor

    NASA Astrophysics Data System (ADS)

    Barceló, C.; Carballo, R.; Garay, L. J.

    2012-04-01

    We explicitly compare the structure of the renormalized stress-energy tensor of a massless scalar field in a (1+1) curved spacetime as obtained by two different strategies: normal-mode construction of the field operator and one-loop effective action. We pay special attention to where and how the information related to the choice of vacuum state in both formalisms is encoded. By establishing a clear translation map between both procedures, we show that these two potentially different renormalized stress-energy tensors are actually equal, when using vacuum-state choices related by this map. One specific aim of the analysis is to facilitate the comparison of results regarding semiclassical effects in gravitational collapse as obtained within these different formalisms.

  3. Pathological choice: the neuroscience of gambling and gambling addiction.

    PubMed

    Clark, Luke; Averbeck, Bruno; Payer, Doris; Sescousse, Guillaume; Winstanley, Catharine A; Xue, Gui

    2013-11-06

    Gambling is pertinent to neuroscience research for at least two reasons. First, gambling is a naturalistic and pervasive example of risky decision making, and thus gambling games can provide a paradigm for the investigation of human choice behavior and "irrationality." Second, excessive gambling involvement (i.e., pathological gambling) is currently conceptualized as a behavioral addiction, and research on this condition may provide insights into addictive mechanisms in the absence of exogenous drug effects. This article is a summary of topics covered in a Society for Neuroscience minisymposium, focusing on recent advances in understanding the neural basis of gambling behavior, including translational findings in rodents and nonhuman primates, which have begun to delineate neural circuitry and neurochemistry involved.

  4. My family made me do it: the influence of family therapists' families of origin on their occupational choice.

    PubMed

    Goldklank, S

    1986-06-01

    This study is an empirical test and exploration of the folklore about family life correlates of family therapists' occupational choice. The folklore is translated into systems concepts, including role complementarity and the mutually determining effect of process and roles. Fifty-nine family therapists, 49 siblings of the therapists, and 51 undifferentiated, non-helping professionals were compared on FACES (29), The Complementary Role Questionnaire, and on demographic data. Inconsistencies in the results led to a critique of the clinical faithfulness of current systems measures. Family therapists did not differ on FACES, but did differ in aspects of roles from their siblings and from the control professionals.

  5. Long-range analysis of density fitting in extended systems

    NASA Astrophysics Data System (ADS)

    Varga, Štefan

    The density fitting scheme is analyzed for the Coulomb problem in extended systems from the point of view of correct long-range behavior. We show that for the correct cancellation of divergent long-range Coulomb terms it is crucial for the density fitting scheme to reproduce the overlap matrix exactly. It is demonstrated that, of all possible fitting metric choices, the Coulomb metric is the only one which inherently preserves the overlap matrix for infinite systems with translational periodicity. Moreover, we show that with a small additional effort any non-Coulomb metric fit can be made overlap-preserving as well. The problem is analyzed for both ordinary and Poisson basis set choices.
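
    For context, the textbook density-fitting equations behind this discussion (a standard sketch, not the paper's own derivation): the density is expanded in an auxiliary basis and the fitting residual is minimized in a chosen metric,

        \tilde\rho(\mathbf{r}) = \sum_p c_p\,\chi_p(\mathbf{r}), \qquad
        \min_{\{c_p\}} \big(\rho-\tilde\rho \,\big|\, \rho-\tilde\rho\big)
        \;\Longrightarrow\; \sum_q \big(\chi_p \big|\, \chi_q\big)\, c_q = \big(\chi_p \big|\, \rho\big),

    where the Coulomb metric is $\big(f \,\big|\, g\big) = \iint f(\mathbf{r})\, g(\mathbf{r}')\, |\mathbf{r}-\mathbf{r}'|^{-1}\, d\mathbf{r}\, d\mathbf{r}'$; the overlap-preservation property discussed above concerns whether the fitted expansion reproduces the overlap integrals of the exact density.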

  6. TranslatomeDB: a comprehensive database and cloud-based analysis platform for translatome sequencing data

    PubMed Central

    Liu, Wanting; Xiang, Lunping; Zheng, Tingkai; Jin, Jingjie

    2018-01-01

    Abstract Translation is a key regulatory step, linking the transcriptome and the proteome. Two major methods of translatome investigation are RNC-seq (sequencing of translating mRNA) and Ribo-seq (ribosome profiling). To facilitate the investigation of translation, we built a comprehensive database, TranslatomeDB (http://www.translatomedb.net/), which provides collection and integrated analysis of published and user-generated translatome sequencing data. The current version includes 2453 Ribo-seq, 10 RNC-seq and their 1394 corresponding mRNA-seq datasets in 13 species. The database emphasizes analysis functions in addition to the dataset collections. Differential gene expression (DGE) analysis can be performed between any two datasets of the same species and type, on both the transcriptome and translatome levels. The translation indices (translation ratio, elongation velocity index and translational efficiency) can be calculated to quantitatively evaluate translational initiation efficiency and elongation velocity. All datasets were analyzed using a unified, robust, accurate and experimentally verifiable pipeline based on the FANSe3 mapping algorithm and edgeR for DGE analysis. TranslatomeDB also allows users to upload their own datasets and utilize the identical unified pipeline to analyze their data. We believe that TranslatomeDB is a comprehensive platform and knowledgebase for translatome and proteome research, freeing biologists from the complexity of searching, analyzing and comparing huge sequencing datasets without local computational power. PMID:29106630
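
    The abstract does not define the indices; translational efficiency is commonly computed per gene as translating-mRNA (or ribosome-footprint) abundance over total mRNA abundance. That common definition is assumed in this sketch rather than taken from the database's documentation:

        def translational_efficiency(ribo_rpkm, mrna_rpkm, eps=1e-9):
            """Per-gene translational efficiency: translating-mRNA (or
            ribosome-footprint) abundance divided by total mRNA abundance
            (common definition; TranslatomeDB's exact formulas may differ)."""
            return {gene: ribo_rpkm[gene] / (mrna_rpkm.get(gene, 0.0) + eps)
                    for gene in ribo_rpkm}

        print(translational_efficiency({"ACTB": 120.0}, {"ACTB": 80.0}))  # ~1.5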

  7. Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method

    NASA Astrophysics Data System (ADS)

    Zhang, Jenmy Zimi

    This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart. This advantage increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of the solution parameter.

  8. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one uses only the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm and the Levenberg-Marquardt method is more effective than the combination of the VP algorithm and the Gauss-Newton method.
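
    A minimal sketch of the separation idea: the linear coefficients are eliminated by an inner linear least-squares solve, leaving a reduced problem in the nonlinear parameters only. Here the Jacobian is left to finite differences rather than the analytic full or simplified forms compared in the paper, and the two-exponential model is invented for illustration:

        import numpy as np
        from scipy.optimize import least_squares

        def vp_residual(theta, t, y, basis):
            """Variable projection residual: solve for the linear coefficients
            by linear least squares, then return the remaining misfit."""
            Phi = basis(theta, t)                        # n x k design matrix
            c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # inner linear solve
            return y - Phi @ c

        # Example model: y = c1*exp(-theta1*t) + c2*exp(-theta2*t).
        basis = lambda th, t: np.column_stack([np.exp(-th[0] * t),
                                               np.exp(-th[1] * t)])
        t = np.linspace(0.0, 4.0, 100)
        y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t)
        fit = least_squares(vp_residual, x0=[0.5, 2.0], args=(t, y, basis))
        print(fit.x)  # should recover approximately [1.0, 3.0]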

  9. A study on the performance comparison of metaheuristic algorithms on the learning of neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2017-08-01

    The learning or training process of neural networks entails the task of finding the most optimal set of parameters, which includes translation vectors, dilation parameters, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of the genetic algorithm half a century ago, the last decade has witnessed the explosion of a variety of novel metaheuristic algorithms, such as the harmony search algorithm, the bat algorithm, and the whale optimization algorithm. Despite the proof of the no-free-lunch theorem in the discipline of optimization, a survey of the machine learning literature gives contrasting results: some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper investigates whether a particular metaheuristic algorithm will outperform the others. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated in the learning of neural networks and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported by previous researchers. Several recommendations are given, which include the need for statistical analysis to verify the results and for further theoretical work to support the obtained empirical results.
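
    As one concrete instance, here is a minimal particle swarm optimization loop over a flattened parameter vector (weights, biases, translation and dilation parameters all concatenated); the swarm size, inertia and acceleration constants are typical textbook values, not those of the study:

        import numpy as np

        def pso_train(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Minimal PSO: positions are candidate parameter vectors; each
            particle is pulled toward its personal best and the global best."""
            rng = np.random.default_rng(0)
            x = rng.uniform(-1, 1, (n_particles, dim))   # positions
            v = np.zeros_like(x)                          # velocities
            pbest = x.copy()
            pbest_val = np.array([loss(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()          # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                vals = np.array([loss(p) for p in x])
                improved = vals < pbest_val
                pbest[improved] = x[improved]
                pbest_val[improved] = vals[improved]
                g = pbest[pbest_val.argmin()].copy()
            return g

        # Example: a sphere function standing in for a network training loss.
        print(pso_train(lambda p: float((p ** 2).sum()), dim=5))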

  10. Validity and reliability of CHOICE Health Experience Questionnaire: Thai version.

    PubMed

    Aiyasanon, Nipa; Premasathian, Nalinee; Nimmannit, Akarin; Jetanavanich, Pantip; Sritippayawan, Suchai

    2009-09-01

    To assess the reliability and validity of the Thai translation of the CHOICE Health Experience Questionnaire (CHEQ), an English-language questionnaire developed specifically for end-stage renal disease (ESRD) patients. The CHEQ comprises two parts: the nine general domains of the SF-36 (physical function, role-physical, bodily pain, mental health, role-emotional, social function, vitality, general health, and reported transition) and the 16 dialysis-specific domains of the CHEQ (role-physical, mental health, general health, freedom, travel restriction, cognitive function, financial function, restriction of diet and fluids, recreation, work, body image, symptoms, sex, sleep, access, and quality of life). The authors translated the CHEQ questionnaire into Thai and confirmed its accuracy by back translation. The pilot study sample was 10 Thai ESRD patients. The CHEQ (Thai) was then applied to 110 Thai ESRD patients: 23 on chronic peritoneal dialysis and 87 on chronic intermittent hemodialysis. Statistical analysis included descriptive statistics, the Mann-Whitney U test, Student's t-test, and Cronbach's alpha. Construct validity was satisfactory, with a significant difference (p < 0.001) between the low and high groups. The reliability coefficient (Cronbach's alpha) for the total scale of the CHEQ (Thai) was 0.98. Cronbach's alphas ranged from 0.58 to 0.92 and were greater than 0.7 for all domains except the social function and quality of life domains (alpha = 0.66 and 0.575). The CHEQ (Thai) is reliable and valid for the assessment of Thai ESRD patients receiving chronic dialysis. Its properties are similar to those reported for the original version.
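
    Since the record leans on Cronbach's alpha throughout, here is the standard formula as a small self-contained sketch (the textbook definition, not code from the study):

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for an (n_subjects x n_items) score matrix:
            alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)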

  11. Prescription of oral hypoglycemic agents for patients with type 2 diabetes mellitus: A retrospective cohort study using a Japanese hospital database.

    PubMed

    Tanabe, Makito; Motonaga, Ryoko; Terawaki, Yuichi; Nomiyama, Takashi; Yanase, Toshihiko

    2017-03-01

    In treatment algorithms for type 2 diabetes mellitus in Western countries, biguanides are recommended as first-line agents. In Japan, various oral hypoglycemic agents (OHAs) are available, but prescription patterns are unclear. Data on 7,108 and 2,655 type 2 diabetes mellitus patients in study 1 and study 2, respectively, were extracted from the Medical Data Vision database (2008-2013). Cardiovascular disease history was not considered in study 1, but was in study 2. The initial choice of OHA, adherence to its use, its effect on glycated hemoglobin levels over 2 years, and the second choice of OHA were investigated. In study 1, α-glucosidase inhibitors, glinides and thiazolidinediones were preferentially prescribed in cases with relatively lower glycated hemoglobin compared with other OHAs. The two most prevalent first prescriptions were biguanides and dipeptidyl peptidase-4 inhibitors, and adherence was greatest for α-glucosidase inhibitors. In patients treated continuously with a single OHA for 2 years, the improvement in glycated hemoglobin levels was greatest for dipeptidyl peptidase-4 inhibitors. As a second OHA added to the first during the first 2 years, dipeptidyl peptidase-4 inhibitors were chosen most often, especially if a biguanide was the first OHA. In study 2, targeting patients with a cardiovascular disease history, a tendency similar to study 1 was observed in the first choice of OHA, adherence, and the second choice of OHA. Even in Japanese type 2 diabetes mellitus patients, the Western algorithm seems to be respected to some degree, and the choice of OHA does not seem to be affected by cardiovascular disease history. © 2016 The Authors. Journal of Diabetes Investigation published by Asian Association for the Study of Diabetes (AASD) and John Wiley & Sons Australia, Ltd.

  12. Translational-circular scanning for magneto-acoustic tomography with current injection.

    PubMed

    Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-01-27

    Magneto-acoustic tomography with current injection is an electrical impedance imaging technology. To explore its potential applications in imaging biological tissue and to enhance image quality, a new scan mode for the transducer, based on translational and circular scanning, is proposed to record acoustic signals from sources, and an imaging algorithm to analyze these signals is developed for this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of the scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify the simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image from the signals. In the simulations, the acoustic-source image correlated with the conductivity at the boundaries of the sample, but the image results changed depending on the distance and angular aspect of the transducer; in general, as angle and distance decreased, image quality improved. The experimental data confirmed this correlation. The acoustic-source images resulting from the alternative scanning mode yielded the outline of the phantom medium. This scan mode enables improvements in the sensitivity of the detecting unit and a change to a transducer array that would improve the efficiency and accuracy of acoustic-source imaging.

  13. Equity, autonomy, and efficiency: what health care system should we have?

    PubMed

    Menzel, Paul T

    1992-02-01

    The U.S. has a wide range of options in choosing a health care system. Rational choice of a system depends on analysis and prioritization of the basic moral goals of equitable access to all citizens, the just sharing of financial costs between well and ill, respect for the values and choices of subscribers and patients, and efficiency in the delivery of costworthy care. These moral goals themselves, however, tell us little about what health care system the United States should have. Equitable access does not demand a level and scope of care for the poor equal to that rationally chosen by the middle class, and there are ways within mixed systems, though not easy ways, to achieve a fair distribution of costs between well and ill. Despite pluralistic systems' apparent advantage in allowing subscribers to choose their own forms of rationing, problems in translating serious long-term subscriber choices into actual medical practice may be greater in pluralistic than in unitary systems. Final choice of a system hinges primarily on peculiar historical facts about U.S. political culture, not on moral principle.

  14. Equality, autonomy, and efficiency: what health care system should we have?

    PubMed

    Menzel, P T

    1992-02-01

    The U.S. has a wide range of options in choosing a health care system. Rational choice of a system depends on analysis and prioritization of the basic moral goals of equitable access to all citizens, the just sharing of financial costs between well and ill, respect for the values and choices of subscribers and patients, and efficiency in the delivery of costworthy care. These moral goals themselves, however, tell us little about what health care system the United States should have. Equitable access does not demand a level and scope of care for the poor equal to that rationally chosen by the middle class, and there are ways within mixed systems, though not easy ways, to achieve a fair distribution of costs between well and ill. Despite pluralistic systems' apparent advantage in allowing subscribers to choose their own forms of rationing, problems in translating serious long-term subscriber choices into actual medical practice may be greater in pluralistic than in unitary systems. Final choice of a system hinges primarily on peculiar historical facts about U.S. political culture, not on moral principle.

  15. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  16. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, and the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to establish statistical significance, and a post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
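
    A minimal simulated-annealing search over the two recursion parameters; the bounds, step size and cooling schedule are illustrative assumptions, and `score` stands in for the paper's regression-model objective (lower is better):

        import math, random

        def anneal_two_params(score, bounds, iters=500, t0=1.0, cooling=0.99):
            """Simulated annealing over two bounded parameters."""
            (lo1, hi1), (lo2, hi2) = bounds
            x = [random.uniform(lo1, hi1), random.uniform(lo2, hi2)]
            fx, t = score(x), t0
            best, fbest = list(x), fx
            for _ in range(iters):
                # Gaussian perturbation, clipped to the parameter bounds.
                cand = [min(hi1, max(lo1, x[0] + random.gauss(0, 0.05 * (hi1 - lo1)))),
                        min(hi2, max(lo2, x[1] + random.gauss(0, 0.05 * (hi2 - lo2))))]
                fc = score(cand)
                # Always accept downhill moves; accept uphill moves with
                # Boltzmann probability so the search can escape local optima.
                if fc < fx or random.random() < math.exp((fx - fc) / t):
                    x, fx = cand, fc
                    if fx < fbest:
                        best, fbest = list(x), fx
                t *= cooling
            return best

        # Example: toy objective with optimum near (0.3, 0.7).
        print(anneal_two_params(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                                bounds=((0.0, 1.0), (0.0, 1.0))))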

  17. Development and validation of an algorithm for laser application in wound treatment.

    PubMed Central

    da Cunha, Diequison Rite; Salomé, Geraldo Magela; Massahud, Marcelo Renato; Mendes, Bruno; Ferreira, Lydia Masako

    2017-01-01

    ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review of the Health Sciences databases covering the past ten years was performed. The algorithm was evaluated by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach's alpha coefficient and the chi-square test of independence were used, with the level of significance set at 5% (p<0.05). Results: Regarding ease of reading the algorithm, 41.7% of the professionals rated it great, 41.7% good, and 16.7% regular. Asked whether the algorithm was sufficient to support decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding whether the algorithm contained enough information to support the choice of laser parameters, 91.7% said yes. The questionnaire showed reliability by the Cronbach's alpha coefficient (α = 0.962). Conclusion: The developed and validated algorithm is reliable for wound evaluation, wound cleaning, and the use of laser therapy in wounds. PMID:29211197

  18. Institutional shared resources and translational cancer research.

    PubMed

    De Paoli, Paolo

    2009-06-29

    The development and maintenance of adequate shared infrastructures is considered a major goal for academic centers promoting translational research programs. Among infrastructures favoring translational research, centralized facilities characterized by shared, multidisciplinary use of expensive laboratory instrumentation, or by complex computer hardware and software and/or by high professional skills are necessary to maintain or improve institutional scientific competitiveness. The success or failure of a shared resource program also depends on the choice of appropriate institutional policies and requires an effective institutional governance regarding decisions on staffing, existence and composition of advisory committees, policies and of defined mechanisms of reporting, budgeting and financial support of each resource. Shared Resources represent a widely diffused model to sustain cancer research; in fact, web sites from an impressive number of research Institutes and Universities in the U.S. contain pages dedicated to the SR that have been established in each Center, making a complete view of the situation impossible. However, a nation-wide overview of how Cancer Centers develop SR programs is available on the web site for NCI-designated Cancer Centers in the U.S., while in Europe, information is available for individual Cancer centers. This article will briefly summarize the institutional policies, the organizational needs, the characteristics, scientific aims, and future developments of SRs necessary to develop effective translational research programs in oncology. In fact, the physical build-up of SRs per se is not sufficient for the successful translation of biomedical research. Appropriate policies to improve the academic culture in collaboration, the availability of educational programs for translational investigators, the existence of administrative facilitations for translational research and an efficient organization supporting clinical trial recruitment and management represent essential tools, providing solutions to overcome existing barriers in the development of translational research in biomedical research centers.

  19. Institutional shared resources and translational cancer research

    PubMed Central

    De Paoli, Paolo

    2009-01-01

    The development and maintenance of adequate shared infrastructures is considered a major goal for academic centers promoting translational research programs. Among infrastructures favoring translational research, centralized facilities characterized by shared, multidisciplinary use of expensive laboratory instrumentation, or by complex computer hardware and software and/or by high professional skills are necessary to maintain or improve institutional scientific competitiveness. The success or failure of a shared resource program also depends on the choice of appropriate institutional policies and requires an effective institutional governance regarding decisions on staffing, existence and composition of advisory committees, policies and of defined mechanisms of reporting, budgeting and financial support of each resource. Shared Resources represent a widely diffused model to sustain cancer research; in fact, web sites from an impressive number of research Institutes and Universities in the U.S. contain pages dedicated to the SR that have been established in each Center, making a complete view of the situation impossible. However, a nation-wide overview of how Cancer Centers develop SR programs is available on the web site for NCI-designated Cancer Centers in the U.S., while in Europe, information is available for individual Cancer centers. This article will briefly summarize the institutional policies, the organizational needs, the characteristics, scientific aims, and future developments of SRs necessary to develop effective translational research programs in oncology. In fact, the physical build-up of SRs per se is not sufficient for the successful translation of biomedical research. Appropriate policies to improve the academic culture in collaboration, the availability of educational programs for translational investigators, the existence of administrative facilitations for translational research and an efficient organization supporting clinical trial recruitment and management represent essential tools, providing solutions to overcome existing barriers in the development of translational research in biomedical research centers. PMID:19563639

  20. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array.

    PubMed

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-03-11

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets at short range using an impulse radar. According to the requirements of high-speed target measurement at short range, this paper establishes a single-input multiple-output (SIMO) antenna array and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile's rotation angle and rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm.

  1. An introduction to quantum machine learning

    NASA Astrophysics Data System (ADS)

    Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco

    2015-04-01

    Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers have investigated whether quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.

  2. Classification of posture maintenance data with fuzzy clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1992-01-01

    Sensory inputs from the visual, vestibular, and proprioceptive systems are integrated by the central nervous system to maintain postural equilibrium. Sustained exposure to microgravity causes neurosensory adaptation during spaceflight, which results in decreased postural stability until readaptation occurs upon return to the terrestrial environment. Data which simulate sensory inputs under various sensory organization test (SOT) conditions were collected in conjunction with Johnson Space Center postural control studies using a tilt-translation device (TTD). The University of West Florida applied the fuzzy c-means (FCM) clustering algorithms to these data with a view towards identifying the various states and stages of subjects experiencing such changes. Feature analysis, time-step analysis, pooling of data, responses of the subjects, and the algorithms used are discussed.
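
    For reference, a compact version of the standard FCM update loop (textbook form with Euclidean distances; the study's feature sets and fuzzifier settings are not given in the abstract):

        import numpy as np

        def fuzzy_c_means(X, c, m=2.0, iters=100, eps=1e-5, seed=0):
            """Standard FCM: alternate membership and centroid updates until
            the membership matrix stabilizes."""
            rng = np.random.default_rng(seed)
            U = rng.random((c, len(X)))
            U /= U.sum(axis=0)                   # memberships sum to 1 per sample
            for _ in range(iters):
                Um = U ** m
                centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
                d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
                U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
                U_new /= U_new.sum(axis=0)       # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
                if np.abs(U_new - U).max() < eps:
                    return centers, U_new
                U = U_new
            return centers, U

        # Example: two well-separated clusters in 1-D.
        X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
        centers, U = fuzzy_c_means(X, c=2)
        print(centers)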

  3. Quantum gates with controlled adiabatic evolutions

    NASA Astrophysics Data System (ADS)

    Hen, Itay

    2015-02-01

    We introduce a class of quantum adiabatic evolutions that we claim may be interpreted as the equivalents of the unitary gates of the quantum gate model. We argue that these gates form a universal set and may therefore be used as building blocks in the construction of arbitrary "adiabatic circuits," analogously to the manner in which gates are used in the circuit model. One implication of the above construction is that arbitrary classical boolean circuits as well as gate model circuits may be directly translated to adiabatic algorithms with no additional resources or complexities. We show that while these adiabatic algorithms fail to exhibit certain aspects of the inherent fault tolerance of traditional quantum adiabatic algorithms, they may have certain other experimental advantages acting as quantum gates.

  4. Health psychology and translational genomic research: bringing innovation to cancer-related behavioral interventions.

    PubMed

    McBride, Colleen M; Birmingham, Wendy C; Kinney, Anita Y

    2015-01-01

    The past decade has witnessed rapid advances in human genome sequencing technology and in the understanding of the role of genetic and epigenetic alterations in cancer development. These advances have raised hopes that such knowledge could lead to improvements in behavioral risk reduction interventions, tailored screening recommendations, and treatment matching that together could accelerate the war on cancer. Despite this optimism, translation of genomic discovery for clinical and public health applications has moved relatively slowly. To date, health psychologists and the behavioral sciences generally have played a very limited role in translation research. In this report we discuss what we mean by genomic translational research and consider the social forces that have slowed translational research, including normative assumptions that translation research must occur downstream of basic science, thus relegating health psychology and other behavioral sciences to a distal role. We then outline two broad priority areas in cancer prevention, detection, and treatment where evidence will be needed to guide evaluation and implementation of personalized genomics: (a) effective communication, to broaden dissemination of genomic discovery, including patient-provider communication and familial communication, and (b) the need to improve the motivational impact of behavior change interventions, including those aimed at altering lifestyle choices and those focusing on decision making regarding targeted cancer treatments and chemopreventive adherence. We further discuss the role that health psychologists can play in interdisciplinary teams to shape translational research priorities and to evaluate the utility of emerging genomic discoveries for cancer prevention and control. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  5. Options for Parallelizing a Planning and Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.

    2011-01-01

    Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges guides an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.

  6. A computer program for the localization of small areas in roentgenological images

    NASA Technical Reports Server (NTRS)

    Keller, R. A.; Baily, N. A.

    1976-01-01

    A method and associated algorithm are presented which allow a simple and accurate determination to be made of the location of small symmetric areas presented in roentgenological images. The method utilizes an operator to visually spot object positions but eliminates the need for critical positioning accuracy on the operator's part. The rapidity of measurement allows results to be evaluated on-line. Parameters associated with the algorithm have been analyzed, and methods to facilitate an optimum choice for any particular experimental setup are presented.

  7. A portable approach for PIC on emerging architectures

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor

    2016-03-01

    A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that three distinct programming paradigms are needed: low-level vector (SIMD) processing, middle-level shared-memory parallel programming, and high-level distributed-memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages as needed. For example, the vector processing and shared-memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open-source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran 2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran 2003 also supports interoperability with C, so implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.

  8. Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”

    PubMed Central

    Schroeder, Christopher L.; Hartmann, Mitra J. Z.

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the “optical flow” equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that “flows” over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip. PMID:23097641

  9. Sensory prediction on a whiskered robot: a tactile analogy to "optical flow".

    PubMed

    Schroeder, Christopher L; Hartmann, Mitra J Z

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the "optical flow" equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that "flows" over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip.

  10. Registration of 3D spectral OCT volumes combining ICP with a graph-based approach

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.; Sonka, Milan

    2012-02-01

The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow up patients over time, image registration is a vital tool to enable more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.
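
A simplified sketch of the second step: for each A-scan column, a brute-force search picks the axial shift that minimizes the MSE against the reference column. The paper's graph-based optimization, which additionally enforces consistency of the shifts across neighboring A-scans, is omitted in this illustration.

```python
import numpy as np

def best_axial_shifts(ref, mov, max_shift=10):
    """Per-A-scan axial shift of `mov` that best matches `ref` (MSE cost).

    ref, mov : 2D B-scans of shape (depth, n_ascans), already XY-registered.
    """
    depth, n = ref.shape
    core = slice(max_shift, depth - max_shift)      # ignore wrapped ends
    shifts = np.zeros(n, dtype=int)
    for j in range(n):
        errs = [np.mean((ref[core, j] - np.roll(mov[:, j], s)[core]) ** 2)
                for s in range(-max_shift, max_shift + 1)]
        shifts[j] = int(np.argmin(errs)) - max_shift
    return shifts

rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 50))
mov = np.roll(ref, -3, axis=0)                      # volume displaced axially
print(best_axial_shifts(ref, mov)[:5])              # recovers shift 3
```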

  11. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization (QUBO) problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of VQA and even improve its scaling properties.
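
The following toy sketch illustrates the flavor of a time-aware stopping rule for a randomized QUBO heuristic: restarts continue only while a crude estimate of the expected improvement from one more restart exceeds the time cost charged per restart. The instance, the greedy solver, and the stopping estimate are all illustrative assumptions, not the paper's optimal-stopping construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2       # random symmetric QUBO

def restart():
    """One greedy 1-bit-flip descent from a random bit string; returns energy."""
    x = rng.integers(0, 2, n)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            s = 1 - 2 * x[i]
            delta = 2 * s * (Q[i] @ x) + Q[i, i]     # energy change of flipping bit i
            if delta < 0:
                x[i] ^= 1
                improved = True
    return x @ Q @ x

time_cost = 0.2                                      # cost charged per restart
samples, best = [], np.inf
while True:
    e = restart(); samples.append(e); best = min(best, e)
    k = len(samples)
    # crude estimate: one more draw beats the record with prob ~1/(k+1), and
    # its typical gain is bounded by the spread (mean - best) of past draws
    exp_gain = (np.mean(samples) - best) / (k + 1)
    if k >= 5 and exp_gain < time_cost:
        break
print(f"stopped after {len(samples)} restarts, best energy {best:.2f}")
```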

  12. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
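
The report does not detail its transform method, but a generic transform-coding sketch conveys the idea: transform a block, coarsely quantize the coefficients (which zeroes most of them), and invert. The block size and quantization step below are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q=20.0):
    """2D DCT transform coding of one image block with uniform quantization."""
    coeffs = dctn(block, norm="ortho")
    quantized = np.round(coeffs / q)                # most entries become zero
    recon = idctn(quantized * q, norm="ortho")
    return recon, np.count_nonzero(quantized)

rng = np.random.default_rng(0)
img = np.cumsum(rng.normal(size=(8, 8)), axis=1)    # smooth-ish toy block
recon, kept = compress_block(img)
print("coefficients kept:", kept, "of", img.size)
print("RMS reconstruction error: %.3f" % np.sqrt(np.mean((img - recon) ** 2)))
```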

  13. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.
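
A CPU-side sketch of the dead-particle filtering idea, with a numpy particle vector standing in for GPU threads: survivors are periodically compacted into consecutive lanes, which on a GPU reduces thread divergence. The one-group slab physics and the filtering period are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = np.zeros(n)                      # particle positions in a 1D slab
alive = np.ones(n, dtype=bool)
sigma_t, p_abs, slab = 1.0, 0.3, 5.0

steps = 0
while alive.any():
    m = int(alive.sum())
    # history-based step: sample a flight and a direction for live particles
    x[alive] += rng.exponential(1 / sigma_t, m) * rng.choice([-1.0, 1.0], m)
    absorbed = rng.random(n) < p_abs
    alive &= ~absorbed & (np.abs(x) < slab)          # absorption or leakage
    steps += 1
    if steps % 4 == 0:                               # occasional filtering:
        x, alive = x[alive], alive[alive]            # compact the survivors
        n = alive.size
print("transport finished after", steps, "steps")
```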

  14. False-nearest-neighbors algorithm and noise-corrupted time series

    NASA Astrophysics Data System (ADS)

    Rhodes, Carl; Morari, Manfred

    1997-05-01

    The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented.
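
A minimal version of the FNN computation is sketched below; the unit delay, the threshold, and the test signal are illustrative choices rather than the paper's settings.

```python
import numpy as np

def fnn_fraction(series, d, r_tol=10.0):
    """Fraction of false nearest neighbors for embedding dimension d (delay 1)."""
    n = len(series) - d                               # points embeddable in d+1
    emb = np.column_stack([series[i:i + n] for i in range(d)])
    false = 0
    for i in range(n):
        dist = np.linalg.norm(emb - emb[i], axis=1)
        dist[i] = np.inf
        j = int(np.argmin(dist))                      # nearest neighbor in R^d
        extra = abs(series[i + d] - series[j + d])    # gap in coordinate d+1
        if extra / max(dist[j], 1e-12) > r_tol:       # neighbor flies apart
            false += 1
    return false / n

t = np.arange(2000) * 0.05
clean = np.sin(t) + 0.5 * np.sin(2.2 * t)             # two-frequency signal
print([round(fnn_fraction(clean, d), 3) for d in (1, 2, 3, 4)])
```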

  15. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE PAGES

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    2017-12-22

This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.

  16. Sensitivity of Global Sea-Air CO2 Flux to Gas Transfer Algorithms, Climatological Wind Speeds, and Variability of Sea Surface Temperature and Salinity

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.; Signorini, Sergio

    2002-01-01

Sensitivity analyses of sea-air CO2 flux to gas transfer algorithms, climatological wind speeds, sea surface temperature (SST), and salinity (SSS) were conducted for the global oceans and selected regional domains. Large uncertainties in the global sea-air flux estimates are identified due to different gas transfer algorithms, global climatological wind speeds, and seasonal SST and SSS data. The global sea-air flux ranges from -0.57 to -2.27 Gt/yr, depending on the combination of gas transfer algorithms and global climatological wind speeds used. Different combinations of SST and SSS global fields resulted in changes as large as 35% in the global sea-air flux. An error as small as plus or minus 0.2 in SSS translates into a plus or minus 43% deviation in the mean global CO2 flux. This result emphasizes the need for highly accurate satellite SSS observations for the development of remote sensing sea-air flux algorithms.
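
For concreteness, the sketch below computes a flux with one widely used quadratic (Wanninkhof-type) gas transfer parameterization for climatological winds; the coefficient, the Schmidt-number fit, and the solubility value are assumptions for illustration, not the paper's inputs.

```python
import numpy as np

def co2_flux(u10, delta_pco2_uatm, sst_c):
    """Sea-air CO2 flux in mol m-2 yr-1 (negative = ocean uptake)."""
    # Schmidt number of CO2 in seawater (assumed cubic fit in SST, deg C)
    sc = 2073.1 - 125.62 * sst_c + 3.6276 * sst_c ** 2 - 0.043219 * sst_c ** 3
    k = 0.39 * u10 ** 2 * (sc / 660.0) ** -0.5   # cm/hr, climatological winds
    k_m_yr = k * 1e-2 * 24 * 365                 # convert cm/hr -> m/yr
    K0 = 4.5e-5                                  # solubility, mol m-3 uatm-1 (assumed)
    return k_m_yr * K0 * delta_pco2_uatm

# Sensitivity to the climatological wind speed, quadratic in u10:
for u in (6.0, 7.0, 8.0):
    print(u, "m/s ->", round(co2_flux(u, -8.0, 15.0), 3), "mol m-2 yr-1")
```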

  17. User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.

    1988-01-01

    Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.

  18. Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    2003-01-01

NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented. The SIBOA testbed is then described. A Discrete Fourier Transform (DFT) is necessary to transfer the incoming wavefront (or estimate of phase error) into the spatial frequency domain to compare it with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.
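
The report's GA details are not reproduced here; the toy genetic algorithm below solves the piston-only problem for a small segmented mirror by maximizing the on-axis far-field intensity. The mirror model, fitness function, and GA settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_seg = 6
true_piston = rng.uniform(-np.pi, np.pi, n_seg)     # unknown phase errors

def fitness(piston):
    """On-axis far-field amplitude of equal segments; 1.0 = perfect correction."""
    return abs(np.exp(1j * (true_piston - piston)).sum()) / n_seg

pop = rng.uniform(-np.pi, np.pi, (60, n_seg))       # random initial population
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]         # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        mask = rng.random(n_seg) < 0.5              # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, n_seg))
    pop = np.vstack([parents, children])
best = max(pop, key=fitness)
print("final on-axis score:", round(fitness(best), 4))
```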

  19. A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction

    DOE PAGES

    Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...

    1995-01-01

In this article, we present a program generation strategy for Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transforms and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated into high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
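
For reference, the block recursion that the tensor-product formulation unrolls is the standard seven-multiplication Strassen scheme, sketched below in plain recursive form. This naive version allocates temporaries freely, which is exactly the working-storage issue the article's formulation addresses.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen multiplication of 2^n x 2^n matrices, recursing to a BLAS leaf."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B                                  # cut over to direct multiply
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.default_rng(0).normal(size=(256, 256))
B = np.random.default_rng(1).normal(size=(256, 256))
print(np.allclose(strassen(A, B), A @ B))             # matches direct product
```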

  20. Basic properties of lattices of cubes, algorithms for their construction, and application capabilities in discrete optimization

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2015-01-01

The basic properties of a new type of lattices—a lattice of cubes—are described. It is shown that, with a suitable choice of union and intersection operations, the set of all subcubes of an N-cube forms a lattice, which is called a lattice of cubes. Algorithms for constructing such lattices are described, and the results produced by these algorithms in the case of lattices of various dimensions are illustrated. It is proved that a lattice of cubes is a lattice with supplements, which makes it possible to minimize and maximize supermodular functions on it. Examples of such functions are given. The possibility of applying previously developed efficient optimization algorithms to the formulation and solution of new classes of problems on lattices of cubes is also discussed.
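
One standard way to make these operations concrete is to encode subcubes of the N-cube as strings over {0, 1, *}, where * marks a free coordinate; the sketch below implements the coordinatewise meet and the smallest-enclosing-subcube join under that encoding, which is a common convention rather than the paper's notation.

```python
def meet(a, b):
    """Intersection of two subcubes, or None if they are disjoint."""
    out = []
    for x, y in zip(a, b):
        if x == y or y == "*":
            out.append(x)
        elif x == "*":
            out.append(y)
        else:
            return None                 # fixed coordinates disagree
    return "".join(out)

def join(a, b):
    """Smallest subcube containing both arguments."""
    return "".join(x if x == y else "*" for x, y in zip(a, b))

print(meet("0*1", "00*"))   # '001' -- a vertex
print(join("0*1", "00*"))   # '0**' -- a 2-dimensional face
print(meet("001", "011"))   # None  -- disjoint vertices
```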

  1. Lossless compression of image data products on the FIFE CD-ROM series

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Strebel, Donald E.

    1993-01-01

How do you store enough of the key data sets, from a total of 120 gigabytes of data collected for a scientific experiment, on a collection of CD-ROMs small enough to distribute to a broad scientific community? In such an application, where information loss is unacceptable, lossless compression algorithms are the only choice. Although lossy compression algorithms can provide an order-of-magnitude improvement in compression ratios over lossless algorithms, the information that is lost is often part of the key scientific precision of the data. Therefore, lossless compression algorithms are, and will continue to be, extremely important in minimizing archival storage requirements and in distributing large Earth and space science (ESS) data sets while preserving the essential scientific precision of the data.
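
The lossless requirement is easy to state in code: whatever coder is used, decompression must reproduce the data bit for bit. A minimal check with a general-purpose coder (zlib, as a stand-in for the algorithms evaluated for the FIFE discs) is shown below.

```python
import zlib
import numpy as np

# Toy 16-bit image band; real FIFE products would be read from disk.
data = np.arange(0, 65536, dtype=np.uint16).tobytes()
packed = zlib.compress(data, level=9)
restored = zlib.decompress(packed)

print("compression ratio: %.2f:1" % (len(data) / len(packed)))
print("bit-for-bit identical:", restored == data)   # must be True for lossless
```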

  2. Algorithms for accelerated convergence of adaptive PCA.

    PubMed

    Chatterjee, C; Kang, Z; Roychowdhury, V P

    2000-01-01

We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
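
As a baseline for what the paper improves on, the classical single-unit gradient-type update (Oja's rule) with a 1/t gain sequence can be sketched as follows; the covariance and the gain schedule are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
C = np.array([[3.0, 1.0], [1.0, 1.0]])               # data covariance
Lchol = np.linalg.cholesky(C)

w = rng.normal(size=2)                               # adaptive weight vector
for t in range(1, 20001):
    x = Lchol @ rng.normal(size=2)                   # one sample of the stream
    y = w @ x
    w += (1.0 / t) * y * (x - y * w)                 # Oja update, gain 1/t

true_pc = np.linalg.eigh(C)[1][:, -1]                # dominant eigenvector
print("alignment with true PC:", abs(w @ true_pc) / np.linalg.norm(w))
```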

  3. Individual differences in migratory behavior shape population genetic structure and microhabitat choice in sympatric blackcaps (Sylvia atricapilla)

    PubMed Central

    Rolshausen, Gregor; Segelbacher, Gernot; Hermes, Claudia; Hobson, Keith A; Schaefer, H Martin

    2013-01-01

    In migratory birds, traits such as orientation and distance are known to have a strong genetic background, and they often exhibit considerable within-population variation. How this variation relates to evolutionary responses to ongoing selection is unknown because the underlying mechanisms that translate environmental changes into population genetic changes are unclear. We show that within-population genetic structure in southern German blackcaps (Sylvia atricapilla) is related to individual differences in migratory behavior. Our 3-year study revealed a positive correlation between individual migratory origins, denoted via isotope (δ2H) values, and genetic distances. Genetic diversity and admixture differed not only across a recently established migratory polymorphism with NW- and SW-migrating birds but also across δ2H clusters within the same migratory route. Our results suggest assortment based on individual migratory origins which would facilitate evolutionary responses. We scrutinized arrival times and microhabitat choice as potential mechanisms mediating between individual variation in migratory behavior and assortment. We found significant support that microhabitat choice, rather than timing of arrival, is associated with individual variation in migratory origins. Moreover, examining genetic diversity across the migratory divide, we found migrants following the NW route to be genetically more distinct from each other compared with migrants following the traditional SW route. Our study suggests that migratory behavior shapes population genetic structure in blackcaps not only across the migratory divide but also on an individual level independent of the divide. Thus, within-population variation in migratory behavior might play an important role in translating environmental change into genetic change. PMID:24324877

  4. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

To address the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merits of the genetic algorithm with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then constructs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The main points are summarized as follows.
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
• This text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  5. Computational electromagnetics: the physics of smooth versus oscillatory fields.

    PubMed

    Chew, W C

    2004-03-15

    This paper starts by discussing the difference in the physics between solutions to Laplace's equation (static) and Maxwell's equations for dynamic problems (Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates the fact that their differing physical characters affect the use of Laplacian field and Helmholtz field in imaging. They also affect the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical characters of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of diagonalization of translators to group theory is presented. Finally, future areas of research for computational electromagnetics are described.

  6. Rotation of a synchronous viscoelastic shell

    NASA Astrophysics Data System (ADS)

    Noyelles, Benoît

    2018-03-01

    Several natural satellites of the giant planets have shown evidence of a global internal ocean, coated by a thin, icy crust. This crust is probably viscoelastic, which would alter its rotational response. This response would translate into several rotational quantities, i.e. the obliquity, and the librations at different frequencies, for which the crustal elasticity reacts differently. This study aims at modelling the global response of the viscoelastic crust. For that, I derive the time-dependence of the tensor of inertia, which I combine with the time evolution of the rotational quantities, thanks to an iterative algorithm. This algorithm combines numerical simulations of the rotation with a digital filtering of the resulting tensor of inertia. The algorithm works very well in the elastic case, provided the problem is not resonant. However, considering tidal dissipation adds different phase lags to the oscillating contributions, which challenge the convergence of the algorithm.

  7. Decentralized Feedback Controllers for Exponential Stabilization of Hybrid Periodic Orbits: Application to Robotic Walking.

    PubMed

    Hamed, Kaveh Akbari; Gregg, Robert D

    2016-07-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.

  8. Decentralized Feedback Controllers for Robust Stabilization of Periodic Orbits of Hybrid Systems: Application to Bipedal Walking.

    PubMed

    Hamed, Kaveh Akbari; Gregg, Robert D

    2017-07-01

This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.

  9. Mobile robot motion estimation using Hough transform

    NASA Astrophysics Data System (ADS)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

This paper proposes an algorithm for estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of space geometry from any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling, and translation are solved separately, breaking the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
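
A minimal sketch of the rotation stage: each scan is mapped to a (theta, rho) Hough accumulator, and the rotation is recovered as the circular shift that best correlates the per-angle peak strengths of the two scans. Grid sizes and the test scene are illustrative assumptions.

```python
import numpy as np

def theta_histogram(points, n_theta=180, n_rho=100, r_max=10.0):
    """Peak accumulator strength per line angle for one range scan."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho))
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(((rho + r_max) / (2 * r_max) * n_rho).astype(int),
                      0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1
    return acc.max(axis=1)

def estimate_rotation(scan_a, scan_b, n_theta=180):
    """Rotation (radians, mod pi) aligning scan_b's lines with scan_a's."""
    ha = theta_histogram(scan_a, n_theta)
    hb = theta_histogram(scan_b, n_theta)
    scores = [np.dot(ha, np.roll(hb, s)) for s in range(n_theta)]
    s = int(np.argmax(scores))
    if s > n_theta // 2:                        # unwrap to (-pi/2, pi/2]
        s -= n_theta
    return -s * np.pi / n_theta

# A straight wall seen before and after the robot rotates by 20 degrees:
t = np.linspace(-2, 2, 200)
wall = np.column_stack([t, np.full_like(t, 3.0)])
ang = np.deg2rad(20)
R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
print(np.rad2deg(estimate_rotation(wall, wall @ R.T)))   # ~20 degrees
```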

  10. Decentralized Feedback Controllers for Exponential Stabilization of Hybrid Periodic Orbits: Application to Robotic Walking*

    PubMed Central

    Hamed, Kaveh Akbari; Gregg, Robert D.

    2016-01-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059

  11. Decentralized Feedback Controllers for Robust Stabilization of Periodic Orbits of Hybrid Systems: Application to Bipedal Walking

    PubMed Central

    Hamed, Kaveh Akbari; Gregg, Robert D.

    2016-01-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:28959117

  12. Estimating Plasma Glucose from Interstitial Glucose: The Issue of Calibration Algorithms in Commercial Continuous Glucose Monitoring Devices

    PubMed Central

    Rossetti, Paolo; Bondia, Jorge; Vehí, Josep; Fanelli, Carmine G.

    2010-01-01

Evaluation of metabolic control in people with diabetes has classically been performed by measuring glucose concentrations in blood samples. Due to the potential improvement it offers in diabetes care, continuous glucose monitoring (CGM) in the subcutaneous tissue is gaining popularity among both patients and physicians. However, CGM devices measure glucose concentration in compartments other than blood, usually the interstitial space. This means that CGM devices need calibration against blood glucose values, and the accuracy of the estimated blood glucose will also depend on the calibration algorithm. The complexity of the relationship between glucose dynamics in blood and the interstitial space contrasts with the simplistic approach of the calibration algorithms currently implemented in commercial CGM devices, translating into suboptimal accuracy. The present review analyzes the issue of calibration algorithms for CGM, focusing exclusively on commercially available glucose sensors. PMID:22163505
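
The simplistic approach the review critiques can be illustrated with a plain linear calibration fitted to a few fingerstick references and applied to the whole trace, ignoring any interstitial-blood lag or distortion; the numbers are illustrative.

```python
import numpy as np

def fit_linear_calibration(sensor_current, reference_bg):
    """Least-squares linear map from raw sensor signal to blood glucose."""
    slope, intercept = np.polyfit(sensor_current, reference_bg, 1)
    return lambda i: slope * i + intercept

currents = np.array([10.0, 18.0, 30.0])              # nA at calibration times
fingersticks = np.array([70.0, 130.0, 230.0])        # mg/dL references
to_bg = fit_linear_calibration(currents, fingersticks)
print(to_bg(np.array([12.0, 25.0])))                 # estimated plasma glucose
```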

  13. Fast prediction of RNA-RNA interaction using heuristic algorithm.

    PubMed

    Montaseri, Soheila

    2015-01-01

Interaction between two RNA molecules plays a crucial role in many medical and biological processes such as gene expression regulation. In this process, an RNA molecule prohibits the translation of another RNA molecule by establishing stable interactions with it. Several algorithms have been developed to predict the structure of the RNA-RNA interaction. High computational time is a common challenge in most of the presented algorithms. In this context, a heuristic method is introduced to accurately predict the interaction between two RNAs based on minimum free energy (MFE). This algorithm uses a few dot matrices for finding the secondary structure of each RNA and the binding sites between the two RNAs. Furthermore, a parallel version of this method is presented. We describe the algorithm's concurrency and parallelism for a multicore chip. The proposed algorithm has been tested on several datasets, including CopA-CopT, R1inv-R2inv, Tar-Tar*, DIS-DIS, and IncRNA54-RepZ in Escherichia coli bacteria. The method has high validity and efficiency, and it runs in low computational time in comparison to other approaches.

  14. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    NASA Technical Reports Server (NTRS)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations, and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  15. Formal Semanol Specification of Ada.

    DTIC Science & Technology

    1980-09-01

Concurrent task modeling involved very little change to the SEMANOL metalanguage. A primitive capable of initiating concurrent SEMANOL task processors (i.e., #CO-COMPUTE) and two primitives corresponding to integer semaphores (i.e., #P and #V) were all that were required. In addition, these changes ... synchronization techniques and choice of correct unblocking alternatives. We should note that it had been our original intention to use the Ada Translator program

  16. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved and a new medical image fusion algorithm is presented, in which the high-frequency and low-frequency coefficients are treated separately. When the high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of the low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved fusion algorithms based on the wavelet transform to fuse two images of the human body, and we evaluate the fusion results through a quality evaluation method. Experimental results show that this algorithm can effectively retain the detail information of the original images and enhance their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
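
A generic wavelet-domain fusion baseline, simpler than the paper's regional-edge-intensity rules but of the same shape, is sketched below; it assumes the third-party PyWavelets package and registered input images.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fuse(img_a, img_b, wavelet="db2"):
    """One-level wavelet fusion: average the low band, keep the stronger
    high-frequency coefficient (a crude stand-in for regional edge rules)."""
    ca, (ha, va, da) = pywt.dwt2(img_a, wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b, wavelet)
    fused_low = 0.5 * (ca + cb)                      # average approximation
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    return pywt.idwt2((fused_low,
                       (pick(ha, hb), pick(va, vb), pick(da, db))), wavelet)

a = np.outer(np.hanning(64), np.hanning(64))         # two toy "modalities"
b = np.roll(a, 8, axis=0)
print(fuse(a, b).shape)                              # fused 64x64 image
```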

  17. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

A case report illustrates how multidisciplinary translational teams can be assessed with outcome, process, and developmental types of evaluation under a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  18. Dopamine, Effort-Based Choice, and Behavioral Economics: Basic and Translational Research

    PubMed Central

    Salamone, John D.; Correa, Merce; Yang, Jen-Hau; Rotolo, Renee; Presby, Rose

    2018-01-01

    Operant behavior is not only regulated by factors related to the quality or quantity of reinforcement, but also by the work requirements inherent in performing instrumental actions. Moreover, organisms often make effort-related decisions involving economic choices such as cost/benefit analyses. Effort-based decision making is studied using behavioral procedures that offer choices between high-effort options leading to relatively preferred reinforcers vs. low effort/low reward choices. Several neural systems, including the mesolimbic dopamine (DA) system and other brain circuits, are involved in regulating effort-related aspects of motivation. Considerable evidence indicates that mesolimbic DA transmission exerts a bi-directional control over exertion of effort on instrumental behavior tasks. Interference with DA transmission produces a low-effort bias in animals tested on effort-based choice tasks, while increasing DA transmission with drugs such as DA transport blockers tends to enhance selection of high-effort options. The results from these pharmacology studies are corroborated by the findings from recent articles using optogenetic, chemogenetic and physiological techniques. In addition to providing important information about the neural regulation of motivated behavior, effort-based choice tasks are useful for developing animal models of some of the motivational symptoms that are seen in people with various psychiatric and neurological disorders (e.g., depression, schizophrenia, Parkinson’s disease). Studies of effort-based decision making may ultimately contribute to the development of novel drug treatments for motivational dysfunction. PMID:29628879

  19. Dopamine, Effort-Based Choice, and Behavioral Economics: Basic and Translational Research.

    PubMed

    Salamone, John D; Correa, Merce; Yang, Jen-Hau; Rotolo, Renee; Presby, Rose

    2018-01-01

    Operant behavior is not only regulated by factors related to the quality or quantity of reinforcement, but also by the work requirements inherent in performing instrumental actions. Moreover, organisms often make effort-related decisions involving economic choices such as cost/benefit analyses. Effort-based decision making is studied using behavioral procedures that offer choices between high-effort options leading to relatively preferred reinforcers vs. low effort/low reward choices. Several neural systems, including the mesolimbic dopamine (DA) system and other brain circuits, are involved in regulating effort-related aspects of motivation. Considerable evidence indicates that mesolimbic DA transmission exerts a bi-directional control over exertion of effort on instrumental behavior tasks. Interference with DA transmission produces a low-effort bias in animals tested on effort-based choice tasks, while increasing DA transmission with drugs such as DA transport blockers tends to enhance selection of high-effort options. The results from these pharmacology studies are corroborated by the findings from recent articles using optogenetic, chemogenetic and physiological techniques. In addition to providing important information about the neural regulation of motivated behavior, effort-based choice tasks are useful for developing animal models of some of the motivational symptoms that are seen in people with various psychiatric and neurological disorders (e.g., depression, schizophrenia, Parkinson's disease). Studies of effort-based decision making may ultimately contribute to the development of novel drug treatments for motivational dysfunction.

  20. Notes on 'Bemächtigungstrieb' and Strachey's translation as 'instinct for mastery'.

    PubMed

    White, Kristin

    2010-08-01

    This short paper looks at Freud's use of the term 'Bemächtigungstrieb' and its translation by Strachey as 'instinct for mastery' when Freud was describing the motives behind his grandson's game with the wooden reel and string in Beyond the Pleasure Principle. The word 'Macht' [power], which is contained in the word 'Bemächtigung' points to Freud's difficult relationship with Alfred Adler, whose early theories on the aggressive drive and later theories on 'striving for power' were initially rejected by Freud. Looking at the changes in Freud's reception of Adlerian terms, some of which he later integrated into his own theory, throws light on his choice of the word 'Bemächtigungstrieb' in 1920, when he was just beginning to introduce his thoughts on the death instinct. A slightly different translation of the word 'Bemächtigungstrieb', one which takes these historical and theoretical aspects into account, could make these connections clearer for the English reader. Copyright © 2010 Institute of Psychoanalysis.

  1. Translational Modeling to Guide Study Design and Dose Choice in Obesity Exemplified by AZD1979, a Melanin-concentrating Hormone Receptor 1 Antagonist.

    PubMed

    Gennemark, P; Trägårdh, M; Lindén, D; Ploj, K; Johansson, A; Turnbull, A; Carlsson, B; Antonsson, M

    2017-07-01

    In this study, we present the translational modeling used in the discovery of AZD1979, a melanin-concentrating hormone receptor 1 (MCHr1) antagonist aimed for treatment of obesity. The model quantitatively connects the relevant biomarkers and thereby closes the scaling path from rodent to man, as well as from dose to effect level. The complexity of individual modeling steps depends on the quality and quantity of data as well as the prior information; from semimechanistic body-composition models to standard linear regression. Key predictions are obtained by standard forward simulation (e.g., predicting effect from exposure), as well as non-parametric input estimation (e.g., predicting energy intake from longitudinal body-weight data), across species. The work illustrates how modeling integrates data from several species, fills critical gaps between biomarkers, and supports experimental design and human dose-prediction. We believe this approach can be of general interest for translation in the obesity field, and might inspire translational reasoning more broadly. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  2. System and method for embedding emotion in logic systems

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A. (Inventor)

    2012-01-01

A system, method, and computer-readable media for creating a stable synthetic neural system. The method includes training an intellectual choice-driven synthetic neural system (SNS), training an emotional rule-driven SNS by generating emotions from rules, incorporating the rule-driven SNS into the choice-driven SNS through an evolvable interface, and balancing the emotional SNS and the intellectual SNS to achieve stability in a nontrivial autonomous environment with a Stability Algorithm for Neural Entities (SANE). Generating emotions from rules can include coding the rules into the rule-driven SNS in a self-consistent way. Training the emotional rule-driven SNS can occur during a training stage in parallel with training the choice-driven SNS. The training stage can include a self-assessment loop which measures performance characteristics of the rule-driven SNS against core genetic code. The method uses a stability threshold to measure the stability of the incorporated rule-driven and choice-driven SNS using SANE.

  3. Learning relative values in the striatum induces violations of normative decision making

    PubMed Central

    Klein, Tilmann A.; Ullsperger, Markus; Jocham, Gerhard

    2017-01-01

    To decide optimally between available options, organisms need to learn the values associated with these options. Reinforcement learning models offer a powerful explanation of how these values are learnt from experience. However, human choices often violate normative principles. We suggest that seemingly counterintuitive decisions may arise as a natural consequence of the learning mechanisms deployed by humans. Here, using fMRI and a novel behavioural task, we show that, when suddenly switched to novel choice contexts, participants’ choices are incongruent with values learnt by standard learning algorithms. Instead, behaviour is compatible with the decisions of an agent learning how good an option is relative to an option with which it had previously been paired. Striatal activity exhibits the characteristics of a prediction error used to update such relative option values. Our data suggest that choices can be biased by a tendency to learn option values with reference to the available alternatives. PMID:28631734
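
The core idea can be reproduced in a toy simulation: an agent that updates option values toward reward relative to a context reference point can come to prefer a globally worse option after a context switch. The task layout, reference signal, and learning rates below are illustrative, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(5)
pay = {"A": 0.8, "B": 0.6, "C": 0.4, "D": 0.2}   # true reward probabilities
contexts = [("A", "B"), ("C", "D")]
V = {k: 0.0 for k in pay}                        # learned relative values
alpha, eps = 0.1, 0.1

for _ in range(3000):
    left, right = contexts[rng.integers(2)]
    if rng.random() < eps:                       # epsilon-greedy exploration
        choice = (left, right)[rng.integers(2)]
    else:
        choice = left if V[left] >= V[right] else right
    reward = float(rng.random() < pay[choice])
    reference = 0.5 * (pay[left] + pay[right])   # context reference (stand-in)
    # prediction-error update toward reward RELATIVE to the context reference
    V[choice] += alpha * ((reward - reference) - V[choice])

print({k: round(v, 2) for k, v in V.items()})
# Novel pairing B (0.6) vs C (0.4): the relative learner prefers C, because
# C was the better option of its old, poorer context.
print("novel-context choice:", "B" if V["B"] >= V["C"] else "C")
```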

  4. Bourdieu’s Cultural Capital in Relation to Food Choices: A Systematic Review of Cultural Capital Indicators and an Empirical Proof of Concept

    PubMed Central

    Kamphuis, Carlijn B. M.; Jansen, Tessa; Mackenbach, Johan P.; van Lenthe, Frank J.

    2015-01-01

    Objective Unhealthy food choices follow a socioeconomic gradient that may partly be explained by one’s ‘cultural capital’, as defined by Bourdieu. We aim 1) to carry out a systematic review to identify existing quantitative measures of cultural capital, 2) to develop a questionnaire to measure cultural capital for food choices, and 3) to empirically test associations of socioeconomic position with cultural capital and food choices, and of cultural capital with food choices. Design We systematically searched large databases for the key-word ‘cultural capital’ in title or abstract. Indicators of objectivised cultural capital and family institutionalised cultural capital, as identified by the review, were translated to food choice relevant indicators. For incorporated cultural capital, we used existing questionnaires that measured the concepts underlying the variety of indicators as identified by the review, i.e. participation, skills, knowledge, values. The questionnaire was empirically tested in a postal survey completed by 2,953 adults participating in the GLOBE cohort study, The Netherlands, in 2011. Results The review yielded 113 studies that fulfilled our inclusion criteria. Several indicators of family institutionalised (e.g. parents’ education completed) and objectivised cultural capital (e.g. possession of books, art) were consistently used. Incorporated cultural capital was measured with a large variety of indicators (e.g. cultural participation, skills). Based on this, we developed a questionnaire to measure cultural capital in relation to food choices. An empirical test of the questionnaire showed acceptable overall internal consistency (Cronbach’s alpha of .654; 56 items), and positive associations between socioeconomic position and cultural capital, and between cultural capital and healthy food choices. Conclusions Cultural capital may be a promising determinant for (socioeconomic inequalities in) food choices. PMID:26244763

  5. Bourdieu's Cultural Capital in Relation to Food Choices: A Systematic Review of Cultural Capital Indicators and an Empirical Proof of Concept.

    PubMed

    Kamphuis, Carlijn B M; Jansen, Tessa; Mackenbach, Johan P; van Lenthe, Frank J

    2015-01-01

    Unhealthy food choices follow a socioeconomic gradient that may partly be explained by one's 'cultural capital', as defined by Bourdieu. We aim 1) to carry out a systematic review to identify existing quantitative measures of cultural capital, 2) to develop a questionnaire to measure cultural capital for food choices, and 3) to empirically test associations of socioeconomic position with cultural capital and food choices, and of cultural capital with food choices. We systematically searched large databases for the key-word 'cultural capital' in title or abstract. Indicators of objectivised cultural capital and family institutionalised cultural capital, as identified by the review, were translated to food choice relevant indicators. For incorporated cultural capital, we used existing questionnaires that measured the concepts underlying the variety of indicators as identified by the review, i.e. participation, skills, knowledge, values. The questionnaire was empirically tested in a postal survey completed by 2,953 adults participating in the GLOBE cohort study, The Netherlands, in 2011. The review yielded 113 studies that fulfilled our inclusion criteria. Several indicators of family institutionalised (e.g. parents' education completed) and objectivised cultural capital (e.g. possession of books, art) were consistently used. Incorporated cultural capital was measured with a large variety of indicators (e.g. cultural participation, skills). Based on this, we developed a questionnaire to measure cultural capital in relation to food choices. An empirical test of the questionnaire showed acceptable overall internal consistency (Cronbach's alpha of .654; 56 items), and positive associations between socioeconomic position and cultural capital, and between cultural capital and healthy food choices. Cultural capital may be a promising determinant for (socioeconomic inequalities in) food choices.

  6. Influence of dose calculation algorithms on the predicted dose distribution and NTCP values for NSCLC patients.

    PubMed

    Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten

    2011-05-01

To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 at the same accelerator were included, and the accelerator used for the treatments was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP values for the lungs were calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences are found between the calculated doses to heart, lungs, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to changes between the investigated dose calculation algorithms. However, the PTV dose levels averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or between 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density corrections in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complications are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
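
For the lung endpoint, the LKB calculation reduces a DVH to a generalized EUD and maps it through a probit curve; a minimal sketch is shown below, where the parameter values (n, m, TD50) and the toy DVH are placeholders, not the paper's parameter sets.

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(dose_bins_gy, volume_fractions, n=1.0, m=0.35, td50=30.0):
    """LKB NTCP from a differential DVH: gEUD followed by a probit mapping."""
    v = np.asarray(volume_fractions) / np.sum(volume_fractions)
    geud = (v @ np.asarray(dose_bins_gy) ** (1.0 / n)) ** n   # generalized EUD
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))                   # normal CDF

# Toy lung DVH: most volume at low dose, a small high-dose tail.
doses = np.array([2.0, 10.0, 20.0, 40.0, 60.0])
vols = np.array([0.50, 0.25, 0.15, 0.07, 0.03])
print("NTCP =", round(lkb_ntcp(doses, vols), 3))
```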

  7. Preferences in Data Production Planning

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Brafman, Ronen; Pang, Wanlin

    2005-01-01

    This paper discusses the data production problem, which consists of transforming a set of (initial) input data into a set of (goal) output data. There are typically many choices among input data and processing algorithms, each leading to significantly different end products. To discriminate among these choices, the planner supports an input language that provides a number of constructs for specifying user preferences over data (and plan) properties. We discuss these preference constructs, how we handle them to guide search, and additional challenges in the area of preference management that this important application domain offers.

  8. On the suitability of the connection machine for direct particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonard

    1990-01-01

The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined, and the structure was reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-up. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms that are specifically of use for direct particle simulation but also expand the data parallel diction.

  9. Direct adaptive fuzzy control of a translating piezoelectric flexible manipulator driven by a pneumatic rodless cylinder

    NASA Astrophysics Data System (ADS)

    Qiu, Zhi-cheng; Wang, Bin; Zhang, Xian-min; Han, Jian-da

    2013-04-01

This study presents a novel translating piezoelectric flexible manipulator driven by a rodless cylinder. Simultaneous positioning control and vibration suppression of the flexible manipulator are accomplished by using a hybrid driving scheme composed of the pneumatic cylinder and a piezoelectric actuator. The pulse code modulation (PCM) method is utilized for the cylinder. First, the system dynamics model is derived and its standard multiple-input multiple-output (MIMO) state-space representation is provided. Second, a composite proportional-derivative (PD) control algorithm and a direct adaptive fuzzy control method are designed for the MIMO system. Also, a time delay compensation algorithm and bandstop and low-pass filters are utilized, in consideration of the control hysteresis and the high-frequency modal vibration caused by the long stroke of the cylinder, gas compression, and nonlinear factors of the pneumatic system. The convergence of the closed-loop system is analyzed. Finally, an experimental apparatus is constructed and experiments are conducted. The effectiveness of the designed controllers and the hybrid driving scheme is verified through simulation and experimental comparison studies. The numerical simulation and experimental results demonstrate that the proposed scheme of employing the pneumatic drive and piezoelectric actuator can suppress the vibration and achieve the desired positioning simultaneously. Furthermore, the adopted adaptive fuzzy control algorithm can significantly enhance the control performance.

  10. Bioinformatics in translational drug discovery.

    PubMed

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  11. AUTOMATIC GENERATION OF FFT FOR TRANSLATIONS OF MULTIPOLE EXPANSIONS IN SPHERICAL HARMONICS

    PubMed Central

    Mirkovic, Dragan; Pettitt, B. Montgomery; Johnsson, S. Lennart

    2009-01-01

    The fast multipole method (FMM) is an efficient algorithm for calculating electrostatic interactions in molecular simulations and a promising alternative to Ewald summation methods. Translation of multipole expansions in spherical harmonics is the most important operation of the fast multipole method, and fast Fourier transform (FFT) acceleration of this operation is among the fastest methods of improving its performance. The technique relies on highly optimized implementations of fast Fourier transform routines for the desired expansion sizes, which need to incorporate knowledge of the symmetries and zero elements in the input arrays. Here a method is presented for the automatic generation of such highly optimized routines. PMID:19763233

  12. Wavelets for sign language translation

    NASA Astrophysics Data System (ADS)

    Wilson, Beth J.; Anspach, Gretel

    1993-10-01

    Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.
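
    The kind of two-dimensional wavelet analysis exploited here can be sketched with PyWavelets: one decomposition level yields detail bands that respond to edges. The wavelet choice and threshold below are illustrative assumptions, not the authors' quadrature mirror filter designs.

        import numpy as np
        import pywt

        frame = np.random.rand(128, 128)             # stand-in for one video frame
        cA, (cH, cV, cD) = pywt.dwt2(frame, 'db2')   # one-level 2D wavelet transform

        # The detail bands respond to horizontal, vertical, and diagonal edges;
        # a simple magnitude threshold yields a crude edge map for later
        # classification of hand motion, location, and handshape.
        detail = np.sqrt(cH**2 + cV**2 + cD**2)
        edge_mask = detail > detail.mean() + 2.0 * detail.std()
        print(edge_mask.sum(), "edge coefficients")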

  13. Nonintegrable Schrodinger discrete breathers.

    PubMed

    Gómez-Gardeñes, J; Floría, L M; Peyrard, M; Bishop, A R

    2004-12-01

    In an extensive numerical investigation of nonintegrable translational motion of discrete breathers in nonlinear Schrödinger lattices, we have used a regularized Newton algorithm to continue these solutions from the limit of the integrable Ablowitz-Ladik lattice. These solutions are shown to be a superposition of a localized moving core and an excited extended state (background) to which the localized moving pulse is spatially asymptotic. The background is a linear combination of small amplitude nonlinear resonant plane waves and it plays an essential role in the energy balance governing the translational motion of the localized core. Perturbative collective variable theory predictions are critically analyzed in the light of the numerical results.

  14. Evaluating Recommendation Systems

    NASA Astrophysics Data System (ADS)

    Shani, Guy; Gunawardana, Asela

    Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer who wishes to employ a recommendation system must choose among a set of candidate approaches. A first step towards selecting an appropriate algorithm is deciding which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments: an offline setting, where recommendation approaches are compared without user interaction; user studies, where a small group of subjects experiments with the system and reports on the experience; and large scale online experiments, where real user populations interact with the system. In each of these cases we describe the types of questions that can be answered and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the properties that they evaluate.
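
    In the offline setting, for example, a metric such as precision@k can be computed from held-out interactions. The sketch below compares two hypothetical recommenders this way; all item names and data are invented.

        # Offline comparison of two recommenders via precision@k.
        def precision_at_k(recommended, relevant, k):
            """Fraction of the top-k recommendations found in the held-out set."""
            return sum(1 for item in recommended[:k] if item in relevant) / k

        held_out = {"item3", "item7", "item9"}           # held-out interactions
        algo_a = ["item3", "item1", "item7", "item2", "item5"]
        algo_b = ["item4", "item9", "item6", "item8", "item3"]

        for name, recs in [("A", algo_a), ("B", algo_b)]:
            print(name, precision_at_k(recs, held_out, k=5))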

  15. TIP: protein backtranslation aided by genetic algorithms.

    PubMed

    Moreira, Andrés; Maass, Alejandro

    2004-09-01

    Several applications require the backtranslation of a protein sequence into a nucleic acid sequence. The degeneracy of the genetic code makes this process ambiguous; moreover, not every translation is equally viable. The usual answer is to mimic the codon usage of the target species; however, this does not capture all the relevant features of the 'genomic styles' of different taxa. The program TIP ('Traducción Inversa de Proteínas') applies genetic algorithms to improve the backtranslation by minimizing the difference of some coding statistics with respect to their average value in the target. http://www.cmm.uchile.cl/genoma/tip/
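
    In miniature, the approach looks like the following sketch: candidate backtranslations are scored by the deviation of their codon frequencies from a target usage and evolved by mutation. The two-amino-acid codon table, target frequencies, and GA settings are toy assumptions, not TIP's actual coding statistics.

        import random

        CODONS = {"M": ["ATG"], "F": ["TTT", "TTC"]}       # toy codon table
        TARGET_USAGE = {"TTT": 0.4, "TTC": 0.6, "ATG": 1.0}

        def random_backtranslation(protein):
            return [random.choice(CODONS[aa]) for aa in protein]

        def cost(codons):
            # Sum of absolute deviations from the target codon frequencies.
            total = len(codons)
            freq = {c: codons.count(c) / total for c in set(codons)}
            return sum(abs(freq.get(c, 0.0) - p) for c, p in TARGET_USAGE.items())

        def mutate(codons, protein):
            i = random.randrange(len(codons))
            child = list(codons)
            child[i] = random.choice(CODONS[protein[i]])   # synonymous swap
            return child

        protein = "MFFFFM"
        population = [random_backtranslation(protein) for _ in range(20)]
        for _ in range(100):                               # keep best half, refill by mutation
            population.sort(key=cost)
            population = population[:10] + [
                mutate(random.choice(population[:10]), protein) for _ in range(10)]
        print("".join(population[0]), cost(population[0]))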

  16. On-orbit flight control algorithm description

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Algorithms are presented for rotational and translational control of the space shuttle orbiter in the orbital mission phases, which are external tank separation, orbit insertion, on-orbit and de-orbit. The program provides a versatile control system structure while maintaining uniform communications with other programs, sensors, and control effectors by using an executive routine/functional subroutine format. Software functional requirements are described using block diagrams where feasible, along with input-output tables, and the software implementation of each function is presented in equations and structured flow charts. Included are a glossary of all symbols used to define the requirements, and an appendix of supportive material.

  17. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
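
    A greedy sketch of the selection criterion: prediction indices are chosen so that the minimum pairwise Hamming distance between the algorithms' prediction vectors, restricted to the chosen indices, grows as fast as possible. The prediction matrix, budget, and greedy rule are illustrative assumptions; the released VDA software may optimize differently.

        from itertools import combinations

        preds = {  # algorithm -> binary predictions on 8 candidate validation items
            "algo1": [1, 0, 1, 1, 0, 0, 1, 0],
            "algo2": [1, 1, 0, 1, 0, 1, 1, 0],
            "algo3": [0, 0, 1, 0, 1, 0, 1, 1],
        }

        def min_pairwise_hamming(chosen):
            return min(sum(a[i] != b[i] for i in chosen)
                       for a, b in combinations(preds.values(), 2))

        chosen, budget = [], 4
        remaining = list(range(len(next(iter(preds.values())))))
        for _ in range(budget):   # add the index that most improves the worst pair
            best = max(remaining, key=lambda i: min_pairwise_hamming(chosen + [i]))
            chosen.append(best)
            remaining.remove(best)
        print(chosen, min_pairwise_hamming(chosen))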

  18. Teaching iSTART to Understand Spanish

    ERIC Educational Resources Information Center

    Dascalu, Mihai; Jacovina, Matthew E.; Soto, Christian M.; Allen, Laura K.; Dai, Jianmin; Guerrero, Tricia A.; McNamara, Danielle S.

    2017-01-01

    iSTART is a web-based reading comprehension tutor. A recent translation of iSTART from English to Spanish has made the system available to a new audience. In this paper, we outline several challenges that arose during the development process, specifically focusing on the algorithms that drive the feedback. Several iSTART activities encourage…

  19. Systems approach for the selection of micro-RNAs as therapeutic biomarkers of anti-EGFR monoclonal antibody treatment in colorectal cancer

    NASA Astrophysics Data System (ADS)

    Deyati, Avisek; Bagewadi, Shweta; Senger, Philipp; Hofmann-Apitius, Martin; Novac, Natalia

    2015-01-01

    miRNA plays an important role in tumourigenesis by regulating the expression of oncogenes and tumour suppressors, thereby affecting cell proliferation and differentiation, apoptosis, invasion, and angiogenesis. miRNAs are potential biomarkers for the diagnosis, prognosis, and therapy of different forms of cancer. However, the relationship between the response of cancer patients to targeted therapy and the resulting modifications of the miRNA transcriptome, in the context of pathway regulation, is poorly understood. Ever-growing pathway and miRNA-mRNA interaction databases, together with freely available mRNA and miRNA expression data from multiple cancer therapies, have produced an unprecedented opportunity to decipher the role of miRNAs in the early prediction of therapeutic efficacy in disease. Efficient translation of -omics data and accumulated knowledge to clinical decision-making is of paramount scientific and public health interest. Well-structured translational algorithms are needed to bridge the gap from databases to decisions. Herein, we present a novel algorithm, SMARTmiR, to prospectively predict the role of miRNA as a therapeutic biomarker for anti-EGFR monoclonal antibody (cetuximab) treatment in colorectal cancer.

  20. R3D: Reduction Package for Integral Field Spectroscopy

    NASA Astrophysics Data System (ADS)

    Sánchez, Sebastián. F.

    2011-06-01

    R3D was developed to reduce fiber-based integral field spectroscopy (IFS) data. The package comprises a set of command-line routines adapted for each of these steps, suitable for creating pipelines. The routines have been tested against simulations, and against real data from various integral field spectrographs (PMAS, PPAK, GMOS, VIMOS and INTEGRAL). Particular attention is paid to the treatment of cross-talk. R3D unifies the reduction techniques for the different IFS instruments into a single one, in order to allow the general public to reduce data from different instruments in a homogeneous, consistent and simple way. Although still in its prototyping phase, it has proved useful for reducing PMAS (both in the Larr and the PPAK modes), VIMOS and INTEGRAL data. The current version has been coded in Perl, using PDL, in order to speed up the algorithm testing phase. Most of the time-critical algorithms have been translated to C, and it is our intention to translate all of them. However, even in this phase R3D is fast enough to produce valuable science frames in reasonable time.

  1. Development and validation of an online interactive, multimedia wound care algorithms program.

    PubMed

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms, we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, an interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage of fully correct, partially correct, or incorrect algorithm and dressing choices, and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong: 85% of algorithm and 87% of dressing choices were fully correct, even though some programming design issues were identified. Online study results were consistently better than the results of a previously conducted, comparable paper-and-pencil study. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly, the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received, indicating its perceived benefits for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. The initial design imperfections and programming problems identified also underscore the importance of testing all paper and Web-based programs designed to educate health care professionals or guide patient care.

  2. Formal language constrained path problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by applications such as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language, they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
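
    For the regular-language case in result (1), the standard construction runs a shortest-path search on the product of the labeled network and the language's automaton. The sketch below is a toy instance of that idea; the network, labels, and DFA are invented.

        import heapq

        edges = {  # node -> list of (neighbor, label, weight)
            "s": [("a", "walk", 2), ("a", "bus", 1)],
            "a": [("t", "bus", 2), ("t", "walk", 5)],
            "t": [],
        }
        # DFA for the language walk* bus walk* (exactly one bus leg).
        dfa = {(0, "walk"): 0, (0, "bus"): 1, (1, "walk"): 1}
        accepting = {1}

        def constrained_shortest_path(source, target):
            # Dijkstra over (graph node, DFA state) pairs.
            dist = {(source, 0): 0}
            heap = [(0, source, 0)]
            while heap:
                d, node, q = heapq.heappop(heap)
                if node == target and q in accepting:
                    return d
                if d > dist.get((node, q), float("inf")):
                    continue
                for nxt, label, w in edges[node]:
                    q2 = dfa.get((q, label))
                    if q2 is None:
                        continue            # transition not allowed by the language
                    if d + w < dist.get((nxt, q2), float("inf")):
                        dist[(nxt, q2)] = d + w
                        heapq.heappush(heap, (d + w, nxt, q2))
            return None

        print(constrained_shortest_path("s", "t"))  # cheapest s-t path with one bus leg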

  3. TranslatomeDB: a comprehensive database and cloud-based analysis platform for translatome sequencing data.

    PubMed

    Liu, Wanting; Xiang, Lunping; Zheng, Tingkai; Jin, Jingjie; Zhang, Gong

    2018-01-04

    Translation is a key regulatory step, linking transcriptome and proteome. Two major methods of translatome investigation are RNC-seq (sequencing of translating mRNA) and Ribo-seq (ribosome profiling). To facilitate the investigation of translation, we built a comprehensive database, TranslatomeDB (http://www.translatomedb.net/), which provides collection and integrated analysis of published and user-generated translatome sequencing data. The current version includes 2453 Ribo-seq, 10 RNC-seq and their 1394 corresponding mRNA-seq datasets in 13 species. The database emphasizes analysis functions in addition to the dataset collections. Differential gene expression (DGE) analysis can be performed between any two datasets of the same species and type, on both the transcriptome and translatome levels. The translation indices (translation ratio, elongation velocity index, and translational efficiency) can be calculated to quantitatively evaluate translational initiation efficiency and elongation velocity. All datasets were analyzed using a unified, robust, accurate and experimentally-verifiable pipeline based on the FANSe3 mapping algorithm and edgeR for DGE analyses. TranslatomeDB also allows users to upload their own datasets and utilize the identical unified pipeline to analyze their data. We believe that TranslatomeDB is a comprehensive platform and knowledgebase for translatome and proteome research, freeing biologists from the complex searching, analysis, and comparison of huge sequencing datasets without the need for local computational power. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
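
    An integer-pixel version of the cross-correlation measurement described above can be sketched with NumPy as follows; the sub-pixel plane fit and dynamic sample-size expansion of Jitter_Correct.m are not reproduced here.

        import numpy as np

        def measure_shift(reference, frame):
            """Estimate the (row, col) translation of `frame` relative to `reference`."""
            F = np.conj(np.fft.fft2(reference)) * np.fft.fft2(frame)
            corr = np.fft.ifft2(F / (np.abs(F) + 1e-12))   # phase correlation surface
            peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape), float)
            shape = np.array(corr.shape)
            wrap = peak > shape / 2
            peak[wrap] -= shape[wrap]        # map wrap-around peaks to signed shifts
            return peak

        ref = np.random.rand(64, 64)
        moved = np.roll(ref, (3, -5), axis=(0, 1))   # known translation
        print(measure_shift(ref, moved))             # approximately [ 3. -5.]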

  5. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array

    PubMed Central

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-01-01

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile’s rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm. PMID:26978372

  6. Data depth based clustering analysis

    DOE PAGES

    Jeong, Myeong -Hun; Cai, Yaping; Sullivan, Clair J.; ...

    2016-01-01

    Here, this paper proposes a new algorithm for identifying patterns within data, based on data depth. Such a clustering analysis has an enormous potential to discover previously unknown insights from existing data sets. Many clustering algorithms already exist for this purpose. However, most algorithms are not affine invariant; therefore, they must operate with different parameters after the data sets are rotated, scaled, or translated. Further, most clustering algorithms, based on Euclidean distance, can be sensitive to noise because they have no global perspective. Parameter selection also significantly affects the clustering results of each algorithm. Unlike many existing clustering algorithms, the proposed algorithm, called data depth based clustering analysis (DBCA), is able to detect coherent clusters after the data sets are affine transformed without changing a parameter. It is also robust to noise because data depth can measure the centrality and outlyingness of the underlying data. Further, it can generate relatively stable clusters by varying the parameter. The experimental comparison with the leading state-of-the-art alternatives demonstrates that the proposed algorithm outperforms DBSCAN and HDBSCAN in terms of affine invariance, and exceeds or matches the robustness to noise of DBSCAN and HDBSCAN. The robustness to parameter selection is also demonstrated through a case study of clustering twitter data.

  7. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.

  8. A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu

    2016-12-01

    This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.

  9. On improving linear solver performance: a block variant of GMRES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Dennis, J M; Jessup, E R

    2004-05-10

    The increasing gap between processor performance and memory access time warrants the re-examination of data movement in iterative linear solver algorithms. For this reason, we explore and establish the feasibility of modifying a standard iterative linear solver algorithm in a manner that reduces the movement of data through memory. In particular, we present an alternative to the restarted GMRES algorithm for solving a single right-hand side linear system Ax = b based on solving the block linear system AX = B. Algorithm performance, i.e. time to solution, is improved by using the matrix A in operations on groups of vectors. Experimental results demonstrate the importance of implementation choices on data movement as well as the effectiveness of the new method on a variety of problems from different application areas.
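
    The data-movement idea can be seen in miniature: applying A to a block of vectors streams A through memory once, instead of once per vector. This NumPy sketch (sizes arbitrary) only illustrates the arithmetic equivalence of the two forms, not the solver itself.

        import numpy as np

        n, s = 1500, 8
        A = np.random.rand(n, n)
        V = np.random.rand(n, s)

        # One pass over A for all s vectors (block operation):
        W_block = A @ V

        # s separate passes over A (vector-at-a-time operations):
        W_loop = np.column_stack([A @ V[:, j] for j in range(s)])

        assert np.allclose(W_block, W_loop)   # same result, different data traffic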

  10. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution; the expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The times required for the CPU and the GPU to complete the full algorithm were benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU performed the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.

  11. Real-time multiple-objective path search for in-vehicle route guidance systems

    DOT National Transportation Integrated Search

    1997-01-01

    The application of multiple-objective route choice for in-vehicle route guidance systems is discussed. A bi-objective path search algorithm is presented and its use demonstrated. A concept of trip quality is introduced that is composed of two objecti...

  12. Evaluating the administration costs of biologic drugs: development of a cost algorithm.

    PubMed

    Tetteh, Ebenezer K; Morris, Stephen

    2014-12-01

    Biologic drugs, as with all other medical technologies, are subject to a number of regulatory, marketing, reimbursement (financing) and other demand-restricting hurdles applied by healthcare payers. One example is the routine use of cost-effectiveness analyses or health technology assessments to determine which medical technologies offer value-for-money. The manner in which these assessments are conducted suggests that, holding all else equal, the economic value of biologic drugs may be determined by how much is spent on administering these drugs or trade-offs between drug acquisition and administration costs. Yet, on the supply-side, it seems very little attention is given to how manufacturing and formulation choices affect healthcare delivery costs. This paper evaluates variations in the administration costs of biologic drugs, taking care to ensure consistent inclusion of all relevant cost resources. From this, it develops a regression-based algorithm with which manufacturers could possibly predict, during process development, how their manufacturing and formulation choices may impact on the healthcare delivery costs of their products.

  13. Identification and validation of loss of function variants in clinical contexts.

    PubMed

    Lescai, Francesco; Marasco, Elena; Bacchelli, Chiara; Stanier, Philip; Mantovani, Vilma; Beales, Philip

    2014-01-01

    The choice of an appropriate variant calling pipeline for exome sequencing data is becoming increasingly more important in translational medicine projects and clinical contexts. Within GOSgene, which facilitates genetic analysis as part of a joint effort of the University College London and the Great Ormond Street Hospital, we aimed to optimize a variant calling pipeline suitable for our clinical context. We implemented the GATK/Queue framework and evaluated the performance of its two callers: the classical UnifiedGenotyper and the new variant discovery tool HaplotypeCaller. We performed an experimental validation of the loss-of-function (LoF) variants called by the two methods using Sequenom technology. UnifiedGenotyper showed a total validation rate of 97.6% for LoF single-nucleotide polymorphisms (SNPs) and 92.0% for insertions or deletions (INDELs), whereas HaplotypeCaller showed 91.7% for SNPs and 55.9% for INDELs. We confirm that GATK/Queue is a reliable pipeline in translational medicine and clinical contexts. We conclude that in our working environment UnifiedGenotyper is the caller of choice, being an accurate method with a high validation rate for error-prone calls such as LoF variants. We finally highlight the importance of experimental validation, especially for INDELs, as part of a standard pipeline in clinical environments.

  14. Comparison of algorithms of testing for use in automated evaluation of sensation.

    PubMed

    Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M

    1990-10-01

    Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second and with interspersion of null stimuli (Békésy testing with null stimuli), provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.

  15. Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays

    PubMed Central

    Salt, Julián; Guinaldo, María; Chacón, Jesús

    2018-01-01

    In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n-input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant. PMID:29747441

  16. Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays.

    PubMed

    Aranda-Escolástico, Ernesto; Salt, Julián; Guinaldo, María; Chacón, Jesús; Dormido, Sebastián

    2018-05-09

    In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n -input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant.

  17. Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Cameron Russell; Mckigney, Edward Allen

    The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
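
    A minimal pure-Metropolis, flat-prior sampler of the kind the library implements might look like the following; the Gaussian log-likelihood target and step size are illustrative assumptions, not Metis code.

        import math, random

        def log_likelihood(theta):
            return -0.5 * theta**2       # standard normal target (up to a constant)

        def metropolis(n_samples, step=0.5, theta=0.0):
            samples, logp = [], log_likelihood(theta)
            for _ in range(n_samples):
                proposal = theta + random.gauss(0.0, step)   # symmetric proposal
                logp_new = log_likelihood(proposal)          # flat prior: likelihood only
                if random.random() < math.exp(min(0.0, logp_new - logp)):
                    theta, logp = proposal, logp_new         # accept
                samples.append(theta)                        # else keep current state
            return samples

        chain = metropolis(10_000)
        print(sum(chain) / len(chain))   # near 0 for the standard normal target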

  18. Use of Preclinical Drug vs. Food Choice Procedures to Evaluate Candidate Medications for Cocaine Addiction.

    PubMed

    Banks, Matthew L; Hutsell, Blake A; Schwienteck, Kathryn L; Negus, S Stevens

    2015-06-01

    Drug addiction is a disease that manifests as an inappropriate allocation of behavior towards the procurement and use of the abused substance and away from other behaviors that produce more adaptive reinforcers (e.g. exercise, work, family and social relationships). The goal of treating drug addiction is not only to decrease drug-maintained behaviors, but also to promote a reallocation of behavior towards alternative, nondrug reinforcers. Experimental procedures that offer concurrent access to both a drug reinforcer and an alternative, nondrug reinforcer provide a research tool for assessment of medication effects on drug choice and behavioral allocation. Choice procedures are currently the standard in human laboratory research on medications development. Preclinical choice procedures have been utilized in biomedical research since the early 1940s, and during the last 10-15 years, their use for evaluation of medications to treat drug addiction has increased. We propose here that parallel use of choice procedures in preclinical and clinical studies will facilitate translational research on development of medications to treat cocaine addiction. In support of this proposition, a review of the literature suggests strong concordance between preclinical effectiveness of candidate medications to modify cocaine choice in nonhuman primates and rodents and clinical effectiveness of these medications to modify either cocaine choice in human laboratory studies or metrics of cocaine abuse in patients with cocaine use disorder. The strongest evidence for medication effectiveness in preclinical choice studies has been obtained with maintenance on the monoamine releaser d-amphetamine, a candidate agonist medication for cocaine use analogous to use of methadone to treat heroin abuse or nicotine formulations to treat tobacco dependence.

  19. Use of Preclinical Drug vs. Food Choice Procedures to Evaluate Candidate Medications for Cocaine Addiction

    PubMed Central

    Banks, Matthew L; Hutsell, Blake A; Schwienteck, Kathryn L; Negus, S. Stevens

    2015-01-01

    Opinion Statement Drug addiction is a disease that manifests as an inappropriate allocation of behavior towards the procurement and use of the abused substance and away from other behaviors that produce more adaptive reinforcers (e.g. exercise, work, family and social relationships). The goal of treating drug addiction is not only to decrease drug-maintained behaviors, but also to promote a reallocation of behavior towards alternative, nondrug reinforcers. Experimental procedures that offer concurrent access to both a drug reinforcer and an alternative, nondrug reinforcer provide a research tool for assessment of medication effects on drug choice and behavioral allocation. Choice procedures are currently the standard in human laboratory research on medications development. Preclinical choice procedures have been utilized in biomedical research since the early 1940s, and during the last 10–15 years, their use for evaluation of medications to treat drug addiction has increased. We propose here that parallel use of choice procedures in preclinical and clinical studies will facilitate translational research on development of medications to treat cocaine addiction. In support of this proposition, a review of the literature suggests strong concordance between preclinical effectiveness of candidate medications to modify cocaine choice in nonhuman primates and rodents and clinical effectiveness of these medications to modify either cocaine choice in human laboratory studies or metrics of cocaine abuse in patients with cocaine use disorder. The strongest evidence for medication effectiveness in preclinical choice studies has been obtained with maintenance on the monoamine releaser d-amphetamine, a candidate agonist medication for cocaine use analogous to use of methadone to treat heroin abuse or nicotine formulations to treat tobacco dependence. PMID:26009706

  20. Covert rapid action-memory simulation (CRAMS): a hypothesis of hippocampal-prefrontal interactions for adaptive behavior.

    PubMed

    Wang, Jane X; Cohen, Neal J; Voss, Joel L

    2015-01-01

    Effective choices generally require memory, yet little is known regarding the cognitive or neural mechanisms that allow memory to influence choices. We outline a new framework proposing that covert memory processing of hippocampus interacts with action-generation processing of prefrontal cortex in order to arrive at optimal, memory-guided choices. Covert, rapid action-memory simulation (CRAMS) is proposed here as a framework for understanding cognitive and/or behavioral choices, whereby prefrontal-hippocampal interactions quickly provide multiple simulations of potential outcomes used to evaluate the set of possible choices. We hypothesize that this CRAMS process is automatic, obligatory, and covert, meaning that many cycles of action-memory simulation occur in response to choice conflict without an individual's necessary intention and generally without awareness of the simulations, leading to adaptive behavior with little perceived effort. CRAMS is thus distinct from influential proposals that adaptive memory-based behavior in humans requires consciously experienced memory-based construction of possible future scenarios and deliberate decisions among possible future constructions. CRAMS provides an account of why hippocampus has been shown to make critical contributions to the short-term control of behavior, and it motivates several new experimental approaches and hypotheses that could be used to better understand the ubiquitous role of prefrontal-hippocampal interactions in situations that require adaptively using memory to guide choices. Importantly, this framework provides a perspective that allows for testing decision-making mechanisms in a manner that translates well across human and nonhuman animal model systems. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Assessing and Evaluating Multidisciplinary Translational Teams: A Mixed Methods Approach

    PubMed Central

    Wooten, Kevin C.; Rose, Robert M.; Ostir, Glenn V.; Calhoun, William J.; Ameredes, Bill T.; Brasier, Allan R.

    2014-01-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team type taxonomy. Based on team maturation and scientific progress, teams were designated as: a) early in development, b) traditional, c) process focused, or d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored. PMID:24064432

  2. Policy interventions to promote healthy eating: a review of what works, what does not, and what is promising.

    PubMed

    Brambila-Macias, Jose; Shankar, Bhavani; Capacci, Sara; Mazzocchi, Mario; Perez-Cueto, Federico J A; Verbeke, Wim; Traill, W Bruce

    2011-12-01

    Unhealthy diets can lead to various diseases, which in turn can translate into a bigger burden for the state in the form of health services and lost production. Obesity alone has enormous costs and claims thousands of lives every year. Although diet quality in the European Union has improved across countries, it still falls well short of conformity with the World Health Organization dietary guidelines. In this review, we classify types of policy interventions addressing healthy eating and identify through a literature review what specific policy interventions are better suited to improve diets. Policy interventions are classified into two broad categories: information measures and measures targeting the market environment. Using this classification, we summarize a number of previous systematic reviews, academic papers, and institutional reports and draw some conclusions about their effectiveness. Of the information measures, policy interventions aimed at reducing or banning unhealthy food advertisements generally have had a weak positive effect on improving diets, while public information campaigns have been successful in raising awareness of unhealthy eating but have failed to translate the message into action. Nutritional labeling allows for informed choice. However, informed choice is not necessarily healthier; knowing or being able to read and interpret nutritional labeling on food purchased does not necessarily result in consumption of healthier foods. Interventions targeting the market environment, such as fiscal measures and nutrient, food, and diet standards, are rarer and generally more effective, though more intrusive. Overall, we conclude that measures to support informed choice have a mixed and limited record of success. On the other hand, measures to target the market environment are more intrusive but may be more effective.

  3. Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing

    ERIC Educational Resources Information Center

    Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel

    2016-01-01

    Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…
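
    The flavor of such a translation rule can be shown in miniature: a linear-recursive function (one recursive call per activation) rewritten as an equivalent loop with an accumulator. The example function is ours, not one from the paper.

        def sum_list_recursive(xs):
            if not xs:                                   # base case
                return 0
            return xs[0] + sum_list_recursive(xs[1:])    # one recursive call: linear

        def sum_list_iterative(xs):
            acc = 0                  # accumulator replaces the call stack
            for x in xs:             # the recursive descent becomes a loop
                acc += x
            return acc

        assert sum_list_recursive([1, 2, 3]) == sum_list_iterative([1, 2, 3]) == 6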

  4. Developing fire management mixes for fire program planning

    Treesearch

    Armando González-Cabán; Patricia B. Shinkle; Thomas J. Mills

    1986-01-01

    Evaluating the economic efficiency of fire management program options requires information on the firefighting inputs, such as vehicles and crews, that would be needed to execute the program option selected. An algorithm was developed to automatically translate the dollars allocated to each type of firefighting input into numbers of units, using a set of weights for a specific fire...

  5. Behavioral economic analysis of drug preference using multiple choice procedure data.

    PubMed

    Greenwald, Mark K

    2008-01-11

    The multiple choice procedure (MCP) has been used to evaluate preference for psychoactive drugs, relative to money amounts (price), in human subjects. The present re-analysis shows that MCP data are compatible with behavioral economic analysis of drug choices. Demand curves were constructed from studies with intravenous fentanyl, intramuscular hydromorphone and oral methadone in opioid-dependent individuals; oral d-amphetamine, oral MDMA alone and during fluoxetine treatment, and smoked marijuana alone or following naltrexone pretreatment in recreational drug users. For each participant and dose, the MCP crossover point was converted into unit price (UP) by dividing the money value ($) by the drug dose (mg/70 kg). At the crossover value, the dose ceases to function as a reinforcer, so "0" was entered for this and higher UPs to reflect lack of drug choice. At lower UPs, the dose functions as a reinforcer and "1" was entered to reflect drug choice. Data for UP vs. average percent choice were plotted in log-log space to generate demand functions. Rank order of opioid inelasticity (slope of non-linear regression) was: fentanyl > hydromorphone (continuing heroin users) > methadone > hydromorphone (heroin abstainers). Rank order of psychostimulant inelasticity was d-amphetamine > MDMA > MDMA + fluoxetine. Smoked marijuana was more inelastic with high-dose naltrexone. These findings show this method translates individuals' drug preferences into estimates of population demand, which has the potential to yield insights into pharmacotherapy efficacy, abuse liability assessment, and individual differences in susceptibility to drug abuse.

  6. Behavioral Economic Analysis of Drug Preference Using Multiple Choice Procedure Data

    PubMed Central

    Greenwald, Mark K.

    2008-01-01

    The Multiple Choice Procedure has been used to evaluate preference for psychoactive drugs, relative to money amounts (price), in human subjects. The present re-analysis shows that MCP data are compatible with behavioral economic analysis of drug choices. Demand curves were constructed from studies with intravenous fentanyl, intramuscular hydromorphone and oral methadone in opioid-dependent individuals; oral d-amphetamine, oral MDMA alone and during fluoxetine treatment, and smoked marijuana alone or following naltrexone pretreatment in recreational drug users. For each participant and dose, the MCP crossover point was converted into unit price (UP) by dividing the money value ($) by the drug dose (mg/70 kg). At the crossover value, the dose ceases to function as a reinforcer, so “0” was entered for this and higher UPs to reflect lack of drug choice. At lower UPs, the dose functions as a reinforcer and “1” was entered to reflect drug choice. Data for UP vs. average percent choice were plotted in log-log space to generate demand functions. Rank order of opioid inelasticity (slope of non-linear regression) was: fentanyl > hydromorphone (continuing heroin users) > methadone > hydromorphone (heroin abstainers). Rank order of psychostimulant inelasticity was d-amphetamine > MDMA > MDMA + fluoxetine. Smoked marijuana was more inelastic with high-dose naltrexone. These findings show this method translates individuals’ drug preferences into estimates of population demand, which has the potential to yield insights into pharmacotherapy efficacy, abuse liability assessment, and individual differences in susceptibility to drug abuse. PMID:17949924
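
    The unit-price transformation described in the two records above is easy to make concrete. In the sketch below the doses, crossover values, and probe prices are invented, and the log-log slope serves only as a crude elasticity proxy, not the authors' non-linear regression.

        import numpy as np

        doses = np.array([10.0, 20.0, 40.0])        # mg/70 kg, hypothetical
        crossovers = np.array([4.0, 10.0, 16.0])    # $ crossover per dose, hypothetical
        up_crossover = crossovers / doses           # unit price where choice stops

        probe_up = np.logspace(-1.5, 0.5, 20)       # unit prices examined
        # Percent choice at each probe UP: fraction of doses still chosen
        # (i.e., probe UP below that dose's crossover UP).
        pct_choice = (probe_up[None, :] < up_crossover[:, None]).mean(axis=0)

        keep = pct_choice > 0                        # log-log fit needs nonzero demand
        slope, _ = np.polyfit(np.log10(probe_up[keep]), np.log10(pct_choice[keep]), 1)
        print(f"demand slope (elasticity proxy): {slope:.2f}")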

  7. A novel harmony search-K means hybrid algorithm for clustering gene expression data

    PubMed Central

    Nazeer, KA Abdul; Sebastian, MP; Kumar, SD Madhu

    2013-01-01

    Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. The DNA microarray technology makes it possible to simultaneously analyze large numbers of genes across different samples. Clustering of microarray data can reveal the hidden gene expression patterns from large quantities of expression data that in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications. But the original k-means algorithm has several drawbacks. It is computationally expensive and generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find out near-global optimal solutions by searching the entire solution space. Low clustering accuracy of the existing algorithms limits their use in many crucial applications of life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering the gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms. PMID:23390351

  8. A novel harmony search-K means hybrid algorithm for clustering gene expression data.

    PubMed

    Nazeer, Ka Abdul; Sebastian, Mp; Kumar, Sd Madhu

    2013-01-01

    Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. The DNA microarray technology makes it possible to simultaneously analyze large numbers of genes across different samples. Clustering of microarray data can reveal the hidden gene expression patterns from large quantities of expression data that in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications. But the original k-means algorithm has several drawbacks. It is computationally expensive and generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. A meta-heuristic optimization algorithm named harmony search helps find out near-global optimal solutions by searching the entire solution space. Low clustering accuracy of the existing algorithms limits their use in many crucial applications of life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering the gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms.
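
    A minimal sketch of the harmony search component of such a hybrid is shown below, minimizing a toy objective rather than a clustering criterion; the HS parameters are typical textbook values, not the paper's.

        import random

        def objective(x):
            return sum(v * v for v in x)   # stand-in for a clustering error

        dim, hms, hmcr, par, bw = 2, 10, 0.9, 0.3, 0.05
        memory = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(hms)]

        for _ in range(2000):
            new = []
            for d in range(dim):
                if random.random() < hmcr:                 # memory consideration
                    v = random.choice(memory)[d]
                    if random.random() < par:              # pitch adjustment
                        v += random.uniform(-bw, bw)
                else:                                      # random selection
                    v = random.uniform(-5, 5)
                new.append(v)
            worst = max(range(hms), key=lambda i: objective(memory[i]))
            if objective(new) < objective(memory[worst]):  # replace worst harmony
                memory[worst] = new

        print(min(memory, key=objective))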

  9. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    NASA Astrophysics Data System (ADS)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.

  10. A dual-processor multi-frequency implementation of the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Godiwala, Pankaj M.; Caglayan, Alper K.

    1987-01-01

    This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed and the performance of the new FDI algorithm is analyzed using flight recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures continue to take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented yielding significantly low execution times with acceptable estimation and FDI performance.

  11. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms.

    PubMed

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R

    2016-07-07

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.

  12. Comparison and optimization of radar-based hail detection algorithms in Slovenia

    NASA Astrophysics Data System (ADS)

    Stržinar, Gregor; Skok, Gregor

    2018-05-01

    Four commonly used radar-based hail detection algorithms are evaluated and optimized in Slovenia. The algorithms are verified against ground observations of hail at manned stations in the period between May and August, from 2002 to 2010. The algorithms are optimized by determining the optimal values of all possible algorithm parameters. A number of different contingency-table-based scores are evaluated, with a combination of the Critical Success Index and frequency bias proving to be the best choice for optimization. The best performance indices are given by the Waldvogel and severe hail index algorithms, followed by vertically integrated liquid and maximum radar reflectivity. Using the optimal parameter values, a hail frequency climatology map for the whole of Slovenia is produced. The analysis shows that there is considerable variability of hail occurrence within the Republic of Slovenia. The hail frequency ranges from almost 0 to 1.7 hail days per year, with an average value of about 0.7 hail days per year.
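
    For reference, the score combination used in the optimization above is straightforward to compute from a 2x2 contingency table; the verification counts in the sketch below are invented.

        def csi(hits, misses, false_alarms):
            """Critical Success Index: hits / (hits + misses + false alarms)."""
            return hits / (hits + misses + false_alarms)

        def frequency_bias(hits, misses, false_alarms):
            """(forecast yes) / (observed yes); 1.0 means unbiased event frequency."""
            return (hits + false_alarms) / (hits + misses)

        hits, misses, fa = 42, 18, 25   # hypothetical verification counts
        print(csi(hits, misses, fa), frequency_bias(hits, misses, fa))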

  13. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm which hybridises the Harmony Search (HS) method with the swarm-intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to govern the choice between the ACO and GHS; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions than the original HS and some of its variants.
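
    A minimal sketch of how such a hybrid improvisation loop can look, assuming illustrative parameter values and a simple pheromone rule (the paper's exact HM representation and update are not reproduced):

        import numpy as np
        rng = np.random.default_rng(1)

        def sphere(x):                              # toy benchmark function
            return float(np.sum(x * x))

        dim, hms, iters = 5, 10, 2000
        lo, hi, hmcr, par = -5.0, 5.0, 0.9, 0.3
        hm = rng.uniform(lo, hi, (hms, dim))        # harmony memory
        fit = np.array([sphere(h) for h in hm])
        pher = np.ones(hms)                         # ACO-style pheromone per harmony

        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:             # memory consideration
                    if rng.random() < 0.5:          # random switch: GHS-style pick
                        new[j] = hm[np.argmin(fit), j]
                    else:                           # ACO proportional transition rule
                        k = rng.choice(hms, p=pher / pher.sum())
                        new[j] = hm[k, j]
                    if rng.random() < par:          # pitch adjustment
                        new[j] += rng.normal(0.0, 0.1)
                else:
                    new[j] = rng.uniform(lo, hi)
            f = sphere(new)
            worst = np.argmax(fit)
            if f < fit[worst]:                      # replace worst harmony
                hm[worst], fit[worst] = new, f
                pher[worst] = 1.0 + 1.0 / (1.0 + f) # reinforce good harmonies
        print("best value:", fit.min())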

  14. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    DOE PAGES

    Chacon, L.; Chen, G.

    2016-04-19

    Here, we extend a recently proposed fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (Φ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.

  15. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.

    2016-07-01

    We extend a recently proposed fully implicit PIC algorithm for the Vlasov-Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (ϕ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.

  16. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, L.; Chen, G.

    Here, we extend a recently proposed fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (Φ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.
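
    The discrete Coulomb-gauge constraint highlighted in the three preceding records can be illustrated on a simple periodic Cartesian grid (the papers work in curvilinear geometry with a carefully matched discretization; this spectral projection is only a minimal stand-in for enforcing ∇ · A = 0):

        import numpy as np

        def project_div_free(Ax, Ay):
            # spectrally remove the curl-free part of (Ax, Ay) on a periodic
            # grid so that the discrete Coulomb gauge div A = 0 holds
            kx = 2 * np.pi * np.fft.fftfreq(Ax.shape[0])[:, None]
            ky = 2 * np.pi * np.fft.fftfreq(Ax.shape[1])[None, :]
            Axh, Ayh = np.fft.fft2(Ax), np.fft.fft2(Ay)
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                       # leave the mean (k = 0) mode alone
            kdota = kx * Axh + ky * Ayh
            Axh, Ayh = Axh - kx * kdota / k2, Ayh - ky * kdota / k2
            return np.fft.ifft2(Axh).real, np.fft.ifft2(Ayh).real

        rng = np.random.default_rng(0)
        Px, Py = project_div_free(rng.normal(size=(64, 64)), rng.normal(size=(64, 64)))
        kx = 2 * np.pi * np.fft.fftfreq(64)[:, None]
        ky = 2 * np.pi * np.fft.fftfreq(64)[None, :]
        div = np.fft.ifft2(1j * (kx * np.fft.fft2(Px) + ky * np.fft.fft2(Py)))
        print(np.abs(div).max())                 # ~1e-15: div A = 0 to round-off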

  17. Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces.

    PubMed

    May, P; Zhou, E; Lee, C W

    2012-10-01

    In this paper we present a new variant of the online real time recurrent learning algorithm proposed by Williams and Zipser (1989). Whilst the original algorithm utilises gradient information to guide the search towards the minimum training error, it is very slow in most applications and often gets stuck in local minima of the search space. It is also sensitive to the choice of learning rate and requires careful tuning. The new variant adjusts weights by moving to the tangent planes to constraint surfaces. It is simple to implement and requires no parameters to be set manually. Experimental results show that this new algorithm gives significantly faster convergence whilst avoiding problems like local minima.
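
    On a single pattern, the tangent-plane move has a simple closed form: linearize the network output around the current weights and step to the nearest point of the tangent plane of the constraint surface {w : y(w) = d}, which gives w <- w - e·g/||g||² with e the output error and g the output gradient, and no learning rate to tune. A toy one-unit sketch (illustrative, not the paper's full recurrent-network update):

        import numpy as np

        def tangent_plane_step(w, x, d):
            y = np.tanh(w @ x)                 # toy one-unit "network"
            g = (1.0 - y * y) * x              # dy/dw for a tanh unit
            e = y - d                          # signed output error
            return w - e * g / (g @ g + 1e-12)

        rng = np.random.default_rng(0)
        w, x, d = rng.normal(size=3), np.array([0.5, -1.0, 2.0]), 0.7
        for _ in range(5):
            w = tangent_plane_step(w, x, d)
        print(np.tanh(w @ x))                  # converges rapidly toward d = 0.7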

  18. Chaotic map clustering algorithm for EEG analysis

    NASA Astrophysics Data System (ADS)

    Bellotti, R.; De Carlo, F.; Stramaglia, S.

    2004-03-01

    The non-parametric chaotic map clustering algorithm has been applied to the analysis of electroencephalographic signals in order to recognize Huntington's disease, one of the most dangerous pathologies of the central nervous system. The performance of the method has been compared with those obtained through parametric algorithms, such as K-means and deterministic annealing, and a supervised multi-layer perceptron. While supervised neural networks need a training phase, performed by means of data tagged by the genetic test, and the parametric methods require a prior choice of the number of classes to find, chaotic map clustering gives natural evidence of the pathological class, without any training or supervision, thus providing a new efficient methodology for the recognition of patterns associated with Huntington's disease.

  19. Isotopic identification using Pulse Shape Analysis of current signals from silicon detectors: Recent results from the FAZIA collaboration

    NASA Astrophysics Data System (ADS)

    Pastore, G.; Gruyer, D.; Ottanelli, P.; Le Neindre, N.; Pasquali, G.; Alba, R.; Barlini, S.; Bini, M.; Bonnet, E.; Borderie, B.; Bougault, R.; Bruno, M.; Casini, G.; Chbihi, A.; Dell'Aquila, D.; Dueñas, J. A.; Fabris, D.; Francalanza, L.; Frankland, J. D.; Gramegna, F.; Henri, M.; Kordyasz, A.; Kozik, T.; Lombardo, I.; Lopez, O.; Morelli, L.; Olmi, A.; Pârlog, M.; Piantelli, S.; Poggi, G.; Santonocito, D.; Stefanini, A. A.; Valdré, S.; Verde, G.; Vient, E.; Vigilante, M.; FAZIA Collaboration

    2017-07-01

    The FAZIA apparatus exploits Pulse Shape Analysis (PSA) to identify nuclear fragments stopped in the first layer of a Silicon-Silicon-CsI(Tl) detector telescope. In this work, for the first time, we show that the isotopes of fragments with atomic number as high as Z∼20 can be identified. Such a remarkable result has been obtained thanks to the careful construction of the Si detectors and to the use of low-noise, high-performance digitizing electronics. Moreover, optimized PSA algorithms are needed. This work deals with the choice of the best algorithm for PSA of current signals. A smoothing spline algorithm is demonstrated to give optimal results without requiring excessive computational resources.

  20. Comparison of multiobjective evolutionary algorithms for operations scheduling under machine availability constraints.

    PubMed

    Frutos, M; Méndez, M; Tohmé, F; Broz, D

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGAII, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool, since it yields more solutions on the approximate Pareto frontier.
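
    Of the two performance indexes used above, the hypervolume has a particularly simple form for two objectives; a minimal sketch for a minimization front (points and reference are illustrative):

        import numpy as np

        def hypervolume_2d(front, ref):
            # hypervolume of a 2-objective minimization front w.r.t. a
            # reference point (both objectives: smaller is better)
            pts = sorted(front)                     # ascending in objective 1
            hv, prev_f2 = 0.0, ref[1]
            for f1, f2 in pts:
                if f2 < prev_f2:                    # skip dominated points
                    hv += (ref[0] - f1) * (prev_f2 - f2)
                    prev_f2 = f2
            return hv

        front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
        print(hypervolume_2d(front, ref=(5.0, 5.0)))   # -> 11.0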

  1. [Crohn's disease surgery].

    PubMed

    Kala, Zdeněk; Marek, Filip; Válek, Vlastimil A; Bartušek, Daniel

    2014-01-01

    Surgery for Crohn's disease is an important part of the general treatment algorithm. The role of surgery is changing with the development of conservative procedures. Recent years have seen a return to early treatment of patients with Crohn's disease. Given the character of the disease and its intestinal symptoms, a specific approach to these patients is necessary, especially regarding the correct choice of surgery. The paper focuses on luminal damage of the small and large intestine, including complications of the disease. We describe the individual indications for a surgical solution, including the choice of anastomosis and of multiple or repeated surgeries.

  2. Evaluating the accuracy performance of Lucas-Kanade algorithm in the circumstance of PIV application

    NASA Astrophysics Data System (ADS)

    Pan, Chong; Xue, Dong; Xu, Yang; Wang, JinJun; Wei, RunJie

    2015-10-01

    The Lucas-Kanade (LK) algorithm, usually used in optical flow estimation, has recently received increasing attention from the PIV community due to its computational efficiency under GPU acceleration. Although applications of this algorithm are continuously emerging, a systematic performance evaluation is still lacking. This forms the primary aim of the present work. Three warping schemes in the LK family (forward, inverse, and symmetric warping) are evaluated in a prototype flow consisting of a hierarchy of multiple two-dimensional vortices. Second-order Newton descent is also considered. The accuracy and efficiency of all these LK variants are investigated over a wide range of influential parameters. It is found that the constant displacement constraint, which is a necessary building block for GPU acceleration, is the most critical issue affecting the LK algorithm's accuracy; this can be partially ameliorated by using second-order Newton descent. Moreover, symmetric warping outperforms the other two warping schemes in accuracy, robustness to noise, convergence speed, and tolerance to displacement gradient, and might be the first choice when applying the LK algorithm to PIV measurement.
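
    The constant-displacement LK step itself is a small least-squares solve over image gradients; a minimal sketch for one interrogation-window pair (no warping scheme or pyramid, which the evaluated variants add on top):

        import numpy as np

        def lk_displacement(win0, win1):
            # one Lucas-Kanade step: the constant displacement (dx, dy) that
            # best explains the intensity change between two windows
            Iy, Ix = np.gradient(win0.astype(float))    # spatial gradients
            It = win1.astype(float) - win0.astype(float)
            A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                          [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
            b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
            return np.linalg.solve(A, b)

        # synthetic check: a smooth pattern shifted by 0.4 px in x
        x = np.linspace(0, 4 * np.pi, 64)
        X, Y = np.meshgrid(x, x)
        f0 = np.sin(X) * np.cos(0.5 * Y)
        f1 = np.sin(X - 0.4 * (x[1] - x[0])) * np.cos(0.5 * Y)
        print(lk_displacement(f0, f1))                  # ~ (0.4, 0.0)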

  3. An analytical particle mover for the charge- and energy-conserving, nonlinearly implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.

    2013-08-01

    We propose a 1D analytical particle mover for the recent charge- and energy-conserving electrostatic particle-in-cell (PIC) algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036]. The approach computes particle orbits exactly for a given piece-wise linear electric field. The resulting PIC algorithm maintains the exact charge and energy conservation properties of the original algorithm, but with improved performance (both in efficiency and robustness against the number of particles and timestep). We demonstrate the advantageous properties of the scheme with a challenging multiscale numerical test case, the ion acoustic wave. Using the analytical mover as a reference, we demonstrate that the choice of error estimator in the Crank-Nicolson mover has significant impact on the overall performance of the implicit PIC algorithm. The generalization of the approach to the multi-dimensional case is outlined, based on a novel and simple charge conserving interpolation scheme.
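
    The closed-form orbit underlying such a mover is elementary for a linear field E(x) = e0 + e1·x, since x'' = (q/m)E(x) is solvable in hyperbolic or trigonometric functions; a 1D sketch (sub-stepping at cell boundaries and the charge-conserving interpolation are omitted):

        import numpy as np

        def exact_push(x0, v0, qm, e0, e1, dt):
            # exact orbit over dt for x'' = a + b*x with a = qm*e0, b = qm*e1
            a, b = qm * e0, qm * e1
            if abs(b) < 1e-14:                    # uniform field: exact parabola
                return x0 + v0 * dt + 0.5 * a * dt * dt, v0 + a * dt
            u0 = x0 + a / b                       # shift so that u'' = b*u
            if b > 0:                             # hyperbolic branch
                w = np.sqrt(b)
                u = u0 * np.cosh(w * dt) + (v0 / w) * np.sinh(w * dt)
                v = u0 * w * np.sinh(w * dt) + v0 * np.cosh(w * dt)
            else:                                 # oscillatory branch
                w = np.sqrt(-b)
                u = u0 * np.cos(w * dt) + (v0 / w) * np.sin(w * dt)
                v = -u0 * w * np.sin(w * dt) + v0 * np.cos(w * dt)
            return u - a / b, v

        # harmonic-field check: E = -x gives oscillation at unit frequency
        print(exact_push(1.0, 0.0, qm=1.0, e0=0.0, e1=-1.0, dt=2 * np.pi))
        # -> (~1.0, ~0.0) after one full period, to round-off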

  4. Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    NASA Astrophysics Data System (ADS)

    Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan

    2014-03-01

    We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.

  5. Automated isotope identification algorithm using artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair

    There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contain a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.

  6. Automated isotope identification algorithm using artificial neural networks

    DOE PAGES

    Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair

    2017-04-12

    There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contain a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.
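
    A rough sketch of training on simulated spectra, with two hypothetical overlapping templates standing in for a 32-isotope library (the network size, training scheme, and data below are illustrative only, not the paper's):

        import numpy as np
        rng = np.random.default_rng(0)

        # two hypothetical isotope templates with overlapping peaks
        E = np.linspace(0.0, 1.0, 64)
        t1 = np.exp(-(E - 0.40)**2 / 0.004)
        t2 = np.exp(-(E - 0.50)**2 / 0.004) + 0.5 * np.exp(-(E - 0.80)**2 / 0.004)

        def simulate(n):
            # Poisson-noised mixtures; target is the relative activity of t1
            frac = rng.random(n)
            lam = 500.0 * (frac[:, None] * t1 + (1.0 - frac)[:, None] * t2) + 1.0
            spec = rng.poisson(lam).astype(float)
            return spec / spec.sum(1, keepdims=True), frac

        # one-hidden-layer network; sigmoid output = relative activity of t1
        W1, b1 = rng.normal(0, 0.1, (64, 16)), np.zeros(16)
        W2, b2 = rng.normal(0, 0.1, 16), 0.0
        lr = 0.5
        for _ in range(3000):
            X, y = simulate(32)
            h = np.tanh(X @ W1 + b1)
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
            g = (p - y) / len(y)                   # output error signal
            gh = np.outer(g, W2) * (1.0 - h * h)   # backpropagate to hidden layer
            W2, b2 = W2 - lr * (h.T @ g), b2 - lr * g.sum()
            W1, b1 = W1 - lr * (X.T @ gh), b1 - lr * gh.sum(0)

        X, y = simulate(3)
        h = np.tanh(X @ W1 + b1)
        print(np.c_[1.0 / (1.0 + np.exp(-(h @ W2 + b2))), y])  # predicted vs true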

  7. Genetic algorithms for multicriteria shape optimization of induction furnace

    NASA Astrophysics Data System (ADS)

    Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo

    2012-09-01

    In this contribution we deal with the multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using the nonlinear conjugate gradient method and the second using a variation of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The direct problem (a coupled problem consisting of magnetic and heat fields) is solved using our own code Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and easily incorporate various types of optimization algorithms.

  8. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
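
    The complexity reduction comes from evaluating nearest-point distances on the image grid itself; a minimal 2D sketch using a distance transform (the paper's range-image formulation also carries the depth channel, omitted here):

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def directed_hausdorff_2d(mask_a, mask_b):
            # max over points of A of the distance to the nearest point of B,
            # computed in O(N) over the grid: distance_transform_edt gives each
            # pixel's distance to the nearest zero pixel, so invert B to get
            # a distance-to-B map
            dist_to_b = distance_transform_edt(~mask_b)
            return dist_to_b[mask_a].max()

        a = np.zeros((100, 100), bool); a[40:60, 40:60] = True
        b = np.zeros((100, 100), bool); b[45:65, 45:65] = True
        print(directed_hausdorff_2d(a, b), directed_hausdorff_2d(b, a))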

  9. Identification of observer/Kalman filter Markov parameters: Theory and experiments

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh; Horta, Lucas G.; Longman, Richard W.

    1991-01-01

    An algorithm to compute Markov parameters of an observer or Kalman filter from experimental input and output data is discussed. The Markov parameters can then be used for identification of a state space representation, with associated Kalman gain or observer gain, for the purpose of controller design. The algorithm is a non-recursive matrix version of two recursive algorithms developed in previous works for different purposes, and the relationship among these algorithms is developed. The new matrix formulation gives insight into the existence and uniqueness of solutions of certain equations and gives bounds on the proper choice of observer order. It is shown that if one uses data containing noise and seeks the fastest possible deterministic observer, the deadbeat observer, one instead obtains the Kalman filter, which is the fastest possible observer in the stochastic environment. Results are demonstrated in numerical studies and in experiments on a ten-bay truss structure.
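
    Stripped of the observer, Markov-parameter identification is a linear least-squares problem in the input history; a minimal SISO sketch with a hypothetical decaying system (the observer/Kalman formulation adds observer terms so that far fewer parameters are needed for lightly damped systems):

        import numpy as np
        rng = np.random.default_rng(0)

        # recover impulse-response (Markov) parameters h_0..h_{p-1} from
        # input/output data via y_k = sum_i h_i * u_{k-i}
        p, n = 20, 400
        u = rng.normal(size=n)
        h_true = 0.8 ** np.arange(p)              # hypothetical decaying system
        y = np.convolve(u, h_true)[:n]

        U = np.column_stack([np.r_[np.zeros(i), u[:n - i]] for i in range(p)])
        h_est, *_ = np.linalg.lstsq(U, y, rcond=None)
        print(np.max(np.abs(h_est - h_true)))     # ~0: exact recovery here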

  10. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest neighbor selection. Building on sparse-representation-based super-resolution image reconstruction, a super-resolution reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy of training a single overcomplete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm performs much better than state-of-the-art algorithms in terms of both PSNR and visual perception.
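
    A minimal sketch of the multi-class selection step: code a patch against each class sub-dictionary and keep the best-fitting class (plain least squares stands in here for the sparse coding described above; all names and sizes are illustrative):

        import numpy as np
        rng = np.random.default_rng(0)

        def best_class(patch, dicts):
            # pick the sub-dictionary with the smallest reconstruction residual
            resid = []
            for D in dicts:                        # D: (patch_dim, atoms)
                c, *_ = np.linalg.lstsq(D, patch, rcond=None)
                resid.append(np.linalg.norm(D @ c - patch))
            return int(np.argmin(resid))

        dicts = [rng.normal(size=(25, 10)) for _ in range(3)]
        patch = dicts[1] @ rng.normal(size=10)     # drawn from class-1 dictionary
        print(best_class(patch, dicts))            # -> 1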

  11. Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    ONeill, P.; Podest, E.; Njoku, E.

    2011-01-01

    Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of both static and ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.

  12. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

    PubMed

    D'Onofrio, David J; Abel, David L; Johnson, Donald E

    2012-03-14

    The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationships found in cellular genomics and the information and language constructs found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition, suggesting that both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  13. Methods for coherent lensless imaging and X-ray wavefront measurements

    NASA Astrophysics Data System (ADS)

    Guizar Sicairos, Manuel

    X-ray diffractive imaging is set apart from other high-resolution imaging techniques (e.g. scanning electron or atomic force microscopy) for its high penetration depth, which enables tomographic 3D imaging of thick samples and buried structures. Furthermore, using short x-ray pulses, it enables the capability to take ultrafast snapshots, giving a unique opportunity to probe nanoscale dynamics at femtosecond time scales. In this thesis we present improvements to phase retrieval algorithms, assess their performance through numerical simulations, and develop new methods for both imaging and wavefront measurement. Building on the original work by Faulkner and Rodenburg, we developed an improved reconstruction algorithm for phase retrieval with transverse translations of the object relative to the illumination beam. Based on gradient-based nonlinear optimization, this algorithm is capable of estimating the object, and at the same time refining the initial knowledge of the incident illumination and the object translations. The advantages of this algorithm over the original iterative transform approach are shown through numerical simulations. Phase retrieval has already shown substantial success in wavefront sensing at optical wavelengths. Although in principle the algorithms can be used at any wavelength, in practice the focus-diversity mechanism that makes optical phase retrieval robust is not practical to implement for x-rays. In this thesis we also describe the novel application of phase retrieval with transverse translations to the problem of x-ray wavefront sensing. This approach allows the characterization of the complex-valued x-ray field in-situ and at-wavelength and has several practical and algorithmic advantages over conventional focused beam measurement techniques. A few of these advantages include improved robustness through diverse measurements, reconstruction from far-field intensity measurements only, and significant relaxation of experimental requirements over other beam characterization approaches. Furthermore, we show that a one-dimensional version of this technique can be used to characterize an x-ray line focus produced by a cylindrical focusing element. We provide experimental demonstrations of the latter at hard x-ray wavelengths, where we have characterized the beams focused by a kinoform lens and an elliptical mirror. In both experiments the reconstructions exhibited good agreement with independent measurements, and in the latter a small mirror misalignment was inferred from the phase retrieval reconstruction. These experiments pave the way for the application of robust phase retrieval algorithms for in-situ alignment and performance characterization of x-ray optics for nanofocusing. We also present a study on how transverse translations help with the well-known uniqueness problem of one-dimensional phase retrieval. We also present a novel method for x-ray holography that is capable of reconstructing an image using an off-axis extended reference in a non-iterative computation, greatly generalizing an earlier approach by Podorov et al. The approach, based on the numerical application of derivatives on the field autocorrelation, was developed from first mathematical principles. We conducted a thorough theoretical study to develop technical and intuitive understanding of this technique and derived sufficient separation conditions required for an artifact-free reconstruction. We studied the effects of missing information in the Fourier domain, and of an imperfect reference, and we provide a signal-to-noise ratio comparison with the more traditional approach of Fourier transform holography. We demonstrated this new holographic approach through proof-of-principle optical experiments and later experimentally at soft x-ray wavelengths, where we compared its performance to Fourier transform holography, iterative phase retrieval and state-of-the-art zone-plate x-ray imaging techniques (scanning and full-field). Finally, we present a demonstration of the technique using a single 20 fs pulse from a high-harmonic table-top source. Holography with an extended reference is shown to provide fast, good quality images that are robust to noise and artifacts that arise from missing information due to a beam stop. (Abstract shortened by UMI.)
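
    For readers unfamiliar with iterative phase retrieval, the alternating-constraint idea the thesis builds on can be shown with generic error-reduction iterations (this is the textbook Gerchberg-Saxton/Fienup scheme with a known support, not the thesis's translation-diverse, gradient-based methods):

        import numpy as np
        rng = np.random.default_rng(0)

        n = 64
        obj = np.zeros((n, n)); obj[24:40, 24:40] = rng.random((16, 16))
        support = obj > 0                           # assumed known exactly here
        meas_mag = np.abs(np.fft.fft2(obj))         # measured far-field modulus

        g = rng.random((n, n)) * support            # random start inside support
        for _ in range(500):
            G = np.fft.fft2(g)
            G = meas_mag * np.exp(1j * np.angle(G))    # enforce Fourier modulus
            g = np.fft.ifft2(G).real
            g = np.where(support & (g > 0), g, 0.0)    # enforce support/positivity
        err = (np.linalg.norm(np.abs(np.fft.fft2(g)) - meas_mag)
               / np.linalg.norm(meas_mag))
        print(err)   # typically decreases as both constraints become consistent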

  14. Radio sky mapping from satellites at very low frequencies

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.

    1991-01-01

    Wave Distribution Function (WDF) analysis is a procedure for making sky maps of the sources of natural electromagnetic waves in space plasmas, given local measurements of some or all of the three magnetic and three electric field components. The work that still needs to be done on this subject includes solving basic methodological problems, translating the solution into efficient algorithms, and embodying the algorithms in computer software. One important scientific use of WDF analysis is to identify the mode of origin of plasmaspheric hiss. Some of the data from the Japanese satellite Akebono (EXOS D) are likely to be suitable for this purpose.

  15. Radio sky mapping from satellites at very low frequencies

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.

    1991-01-01

    Wave Distribution Function (WDF) analysis is a procedure for making sky maps of the sources of natural electromagnetic waves in space plasmas, given local measurements of some or all of the three magnetic and three electric field components. The work that still needs to be done on this subject includes solving basic methodological problems, translating the solution into efficient algorithms, and embodying the algorithms in computer software. One important scientific use of WDF analysis is to identify the mode of origin of plasmaspheric hiss. Some of the data from the Japanese satellite Akebono (EXOS D) are likely to be suitable for this purpose.

  16. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture and is thus potentially implementable in real time.
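
    A toy 2D version of the voting scheme, using a similarity-invariant encoding of each point relative to an ordered basis pair (the SAR application hashes dominant-scatterer locations; the coordinates and quantization here are illustrative):

        import numpy as np
        from collections import defaultdict

        def basis_coords(p, b0, b1):
            # coordinates of p in the frame defined by (b0, b1); invariant
            # under translation, rotation, and uniform scale, then quantized
            e1 = b1 - b0
            e2 = np.array([-e1[1], e1[0]])
            d = p - b0
            s = e1 @ e1
            return (int(round(4 * (d @ e1) / s)), int(round(4 * (d @ e2) / s)))

        def build_table(model):
            table = defaultdict(list)
            for i, b0 in enumerate(model):          # every ordered basis pair
                for j, b1 in enumerate(model):
                    if i == j:
                        continue
                    for p in model:
                        table[basis_coords(p, b0, b1)].append((i, j))
            return table

        model = np.array([[0., 0.], [2., 0.], [2., 1.], [0.5, 2.]])
        table = build_table(model)

        # recognize a rotated, scaled, translated copy by voting
        th = 0.7
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        scene = 1.8 * model @ R.T + np.array([5.0, -3.0])
        votes = defaultdict(int)
        b0, b1 = scene[0], scene[1]                 # try one scene basis pair
        for p in scene:
            for entry in table.get(basis_coords(p, b0, b1), []):
                votes[entry] += 1
        print(max(votes.items(), key=lambda kv: kv[1]))
        # the correct model basis (0, 1) collects a vote from every scene point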

  17. FPGA implementation of digital down converter using CORDIC algorithm

    NASA Astrophysics Data System (ADS)

    Agarwal, Ashok; Lakshmi, Boppana

    2013-01-01

    In radio receivers, Digital Down Converters (DDCs) are used to translate the signal from an intermediate frequency (IF) to baseband. A DDC also decimates the oversampled signal to a lower sample rate, eliminating the need for a high-end digital signal processor. In this paper we implement an architecture for a DDC employing the CORDIC algorithm, which down-converts a 70 MHz (3G) IF signal to a 200 kHz baseband GSM signal with an SFDR greater than 100 dB. The implemented architecture reduces hardware resource requirements by 15 percent compared with other architectures available in the literature, owing to the elimination of explicit multipliers and of a quadrature phase shifter for mixing.
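
    The multiplier-free mixing rests on rotation-mode CORDIC, which produces the cos/sin of the accumulated phase using shifts and adds only; a floating-point sketch (a hardware DDC would use fixed-point arithmetic driven by a phase accumulator):

        import math

        N = 16
        ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
        K = 1.0
        for i in range(N):
            K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # CORDIC gain correction

        def cordic_cos_sin(theta):
            # rotate the unit vector (K, 0) by theta via micro-rotations
            x, y, z = K, 0.0, theta
            for i in range(N):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * ANGLES[i]
            return x, y

        c, s = cordic_cos_sin(0.6)
        print(c - math.cos(0.6), s - math.sin(0.6))   # ~1e-5 after 16 iterations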

  18. Embedded Relative Navigation Sensor Fusion Algorithms for Autonomous Rendezvous and Docking Missions

    NASA Technical Reports Server (NTRS)

    DeKock, Brandon K.; Betts, Kevin M.; McDuffie, James H.; Dreas, Christine B.

    2008-01-01

    bd Systems (a subsidiary of SAIC) has developed a suite of embedded relative navigation sensor fusion algorithms to enable NASA autonomous rendezvous and docking (AR&D) missions. Translational and rotational Extended Kalman Filters (EKFs) were developed to integrate measurements based on the vehicles' orbital mechanics and high-fidelity sensor error models, providing a solution with increased accuracy and robustness relative to any single relative navigation sensor. The filters were tested through stand-alone covariance analysis, closed-loop testing with a high-fidelity multi-body orbital simulation, and hardware-in-the-loop (HWIL) testing in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL).
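
    A generic EKF predict/update skeleton of the kind such filters are built from, with stand-in dynamics and a scalar range measurement (the actual filters propagate relative orbital mechanics and attitude with full sensor error models):

        import numpy as np

        def ekf_step(x, P, z, Q, R, dt):
            # predict with constant-velocity dynamics f(x) = [pos + vel*dt, vel]
            F = np.array([[1.0, dt], [0.0, 1.0]])
            x = F @ x
            P = F @ P @ F.T + Q
            # update with nonlinear range measurement h(x) = |pos|
            hx = abs(x[0])
            H = np.array([[np.sign(x[0]), 0.0]])
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ np.atleast_1d(z - hx)).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.array([8.0, -1.0]), np.eye(2)
        Q, R = 1e-4 * np.eye(2), np.array([[0.01]])
        for k in range(20):
            true_pos = 10.0 - 0.1 * (k + 1)       # synthetic closing trajectory
            x, P = ekf_step(x, P, z=abs(true_pos), Q=Q, R=R, dt=0.1)
        print(x)                                  # position/velocity estimate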

  19. The continuum fusion theory of signal detection applied to a bi-modal fusion problem

    NASA Astrophysics Data System (ADS)

    Schaum, A.

    2011-05-01

    A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio (GLR) test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
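
    For reference, the GLR test that is intractable for this model maximizes the unknown parameters out of each hypothesis before forming the ratio (standard definition, not the paper's notation):

        \Lambda(\mathbf{x}) =
        \frac{\max_{\theta \in \Theta_1} p(\mathbf{x} \mid H_1, \theta)}
             {\max_{\theta \in \Theta_0} p(\mathbf{x} \mid H_0, \theta)}
        \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta .

    Continuum fusion instead applies its fusion logic, expressed as max/min conditions, across the family of fixed-parameter detectors, which for this bi-modal model yields a closed-form test.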

  20. Regenerative Endodontics: Barriers and Strategies for Clinical Translation

    PubMed Central

    Kim, Sahng G.; Zhou, Jian; Ye, Ling; Cho, Shoko; Suzuki, Takahiro; Fu, Susan Y.; Yang, Rujing; Zhou, Xuedong; Mao, Jeremy J.

    2014-01-01

    Despite a great deal of enthusiasm and effort, regenerative endodontics has encountered substantial challenges towards clinical translation. The recent adoption by the American Dental Association (ADA) of evoked pulp bleeding in immature permanent teeth is an important step for regenerative endodontics. However, there is no regenerative therapy for the majority of endodontic diseases. Simple recapitulation of cell therapy and tissue engineering strategies that are under development for other organ systems has not led to clinical translation in regenerative endodontics. Dental pulp stem cells may appear to be an a priori choice for dental pulp regeneration. However, dental pulp stem cells may not be available in a patient who is in need of pulp regeneration. Even if dental pulp stem cells are available autologously or perhaps allogeneically, one must address a multitude of scientific, regulatory and commercialization barriers; unless these issues are resolved, transplantation of dental pulp stem cells will remain a scientific exercise rather than a clinical reality. Recent work using novel biomaterial scaffolds and growth factors that orchestrate the homing of host endogenous cells represents a departure from traditional cell transplantation approaches and may accelerate clinical translation. Given the functions and scale of dental pulp and dentin, regenerative endodontics is poised to become one of the early biological solutions in regenerative dental medicine. PMID:22835543
